<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdata.2023.974072</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Personalized diversification of complementary recommendations with user preference in online grocery</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Ma</surname> <given-names>Luyi</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1800687/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Sinha</surname> <given-names>Nimesh</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Cho</surname> <given-names>Jason H. D.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Kumar</surname> <given-names>Sushant</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1676033/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Achan</surname> <given-names>Kannan</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Walmart Global Tech</institution>, <addr-line>Sunnyvale, CA</addr-line>, <country>United States</country></aff>
<aff id="aff2"><sup>2</sup><institution>DoorDash</institution>, <addr-line>San Francisco, CA</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Dawei Yin, Baidu, China</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Yanjie Fu, University of Central Florida, United States; Xianzhi Wang, University of Technology Sydney, Australia</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Nimesh Sinha <email>nimesh280&#x00040;gmail.com</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Recommender Systems, a section of the journal Frontiers in Big Data</p></fn></author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>03</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>6</volume>
<elocation-id>974072</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>06</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>27</day>
<month>02</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 Ma, Sinha, Cho, Kumar and Achan.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Ma, Sinha, Cho, Kumar and Achan</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Complementary recommendations play an important role in surfacing relevant items to customers. In the cross-selling scenario, some customers present more exploratory shopping behaviors and prefer more diverse complements, while other customers show less exploratory (or more conventional) shopping behaviors and prefer a deeper dive into less diverse types of complements. The existence of these two distinct shopping behaviors reflects users&#x00027; different shopping intents and requires complementary recommendations to be adaptable to the user&#x00027;s shopping intent. Although many studies focus on improving recommendations through post-processing techniques, such as user-item-level personalized ranking and diversification of recommendations, they fail to address this requirement. First, many user-item-level personalization methods cannot explicitly model users&#x00027; preference between the two types of shopping behaviors and their intent on the corresponding complementary recommendations. Second, most diversification methods increase the heterogeneity of the recommendations, whereas users&#x00027; intent on conventional complementary shopping calls for more homogeneity, which is not explicitly modeled. The present study attempts to solve these problems through personalized diversification strategies for complementary recommendations. To address the requirement of modeling heterogenized and homogenized complementary recommendations, we propose two diversification strategies, heterogenization and homogenization, to re-rank complementary recommendations based on the determinantal point process (DPP). We use transaction history to estimate users&#x00027; intent on more exploratory or more conventional complementary shopping. With the estimated user intent scores and the two diversification strategies, we propose an algorithm to personalize the diversification strategies dynamically. We demonstrate the effectiveness of our re-ranking algorithm on the publicly available Instacart dataset.</p></abstract>
<kwd-group>
<kwd>diversification</kwd>
<kwd>re-ranking</kwd>
<kwd>recommender system (RS)</kwd>
<kwd>complementary recommendation</kwd>
<kwd>personalization</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="2"/>
<equation-count count="11"/>
<ref-count count="20"/>
<page-count count="9"/>
<word-count count="5964"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Recommender systems are an essential part of the e-commerce business. Recommending relevant items to customers makes the shopping experience more comfortable and time-saving. Online grocery platforms also place a wide variety of recommendation systems at various sections of their websites to improve the customer journey. One of the important sections is complementary item recommendations for cross-selling. Given a query item, complementary recommendations show the query item&#x00027;s complements, which customers frequently co-purchase with it to fulfill a particular demand. For example, when a customer purchases a bag of <monospace>hot dogs</monospace>, she/he might also want to purchase a bag of <monospace>hot dog buns</monospace> together. Showing <monospace>hot dog buns</monospace> for <monospace>hot dogs</monospace> as a complementary item recommendation will improve the shopping experience.</p>
<p>However, it is non-trivial to effectively recommend complementary items for a given query item when users show different co-purchase behaviors, i.e., more exploratory or more conventional co-purchases, as shown in <xref ref-type="fig" rid="F1">Figure 1</xref>. When a user prefers more exploratory co-purchases, she/he might also prefer more heterogeneous item recommendations complementary to the query item because of the intent to explore. When a user prefers more conventional co-purchases, she/he might favor less diversified or even homogeneous complementary item recommendations because of the intent to compare classic combinations in depth. In this case, the diversity of complementary item recommendations should adapt to the co-purchase pattern (exploratory vs. conventional) with personalization. Such an adaptation requires not only modeling the diversification of complementary item recommendations for the more exploratory shopping intent, but also properly homogenizing the recommendations for the conventional shopping intent. Furthermore, we need to personalize the adaptation for users by their shopping intent.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Exploratory co-purchase behaviors vs. Conventional co-purchase behaviors.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-974072-g0001.tif"/>
</fig>
<p>All these problems become more challenging in online grocery because grocery items are deeply embedded in daily life across many co-purchase scenarios, so co-purchase patterns are more diverse and flexible than in other online marketplaces. For example, in online grocery, <monospace>tortilla chips</monospace> have many food complements, such as <monospace>salsa dip</monospace>, <monospace>guacamole dip</monospace>, and <monospace>soft drink</monospace>, and also non-food complements such as <monospace>chip bowl</monospace>, while in online electronics e-commerce, a television (TV) might only have a fixed set of complements related to television shopping.</p>
<p>Diversity of item recommendations could be quantified by item attributes. One of the commonly used attributes is the hierarchical classification of an item in the taxonomy. <xref ref-type="fig" rid="F2">Figure 2</xref> presents an example of grocery item taxonomy, with <italic>item department, item category, item type</italic>, and <italic>individual items</italic>. While items from the same category (one level of item classification in the taxonomy) generally share a similar item functionality (e.g., items from the milk category), the department level classification summarizes the diversity of the customer shopping intent better because each department could represent an aspect of daily shopping. In the aforementioned example of <monospace>tortilla chips</monospace>, the customer needs to purchase items from multiple departments such as <italic>Deli</italic> and <italic>Beverage</italic>. In our case, we define the diversity of complementary items at the department level (i.e., only items from two different departments contribute to the increment in the recommendation diversity).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>An example of grocery item taxonomy.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-974072-g0002.tif"/>
</fig>
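<p>Under this definition, department-level diversity can be counted as the number of distinct departments covered by a recommendation list. The following minimal sketch illustrates the counting with a hypothetical item-to-department lookup (the department names here are illustrative stand-ins, not the actual taxonomy):</p>

```python
from typing import Dict, List

def department_diversity(recommendations: List[str],
                         item_to_department: Dict[str, str]) -> int:
    """Number of distinct departments covered by the recommendations;
    only items from different departments increase the diversity."""
    return len({item_to_department[item] for item in recommendations})

# Hypothetical department-level taxonomy for the tortilla chips example.
taxonomy = {
    "salsa dip": "Deli",
    "guacamole dip": "Deli",
    "soft drink": "Beverage",
    "chip bowl": "Home",
}
department_diversity(["salsa dip", "guacamole dip"], taxonomy)            # 1
department_diversity(["salsa dip", "soft drink", "chip bowl"], taxonomy)  # 3
```

<p>Two Deli items contribute no more diversity than one, while items spread over Deli, Beverage, and Home count as three.</p>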
<p>Many complementary item recommendation models have mainly focused on learning the complementarity between items rather than on the personalized adjustment of the diversity of complementary item recommendations (McAuley et al., <xref ref-type="bibr" rid="B12">2015</xref>; Barkan and Koenigstein, <xref ref-type="bibr" rid="B2">2016</xref>; Wan et al., <xref ref-type="bibr" rid="B14">2018</xref>; Wang et al., <xref ref-type="bibr" rid="B15">2018</xref>; Zhang et al., <xref ref-type="bibr" rid="B19">2018</xref>; Xu et al., <xref ref-type="bibr" rid="B18">2019</xref>; Liu et al., <xref ref-type="bibr" rid="B9">2020</xref>). Diversification of complementary item recommendations has recently been addressed in Hao et al. (<xref ref-type="bibr" rid="B5">2020</xref>) by considering item types and categories. Unfortunately, it cannot adapt to users&#x00027; shopping intent by surfacing more heterogeneous complementary items for exploratory shopping intent or more homogeneous complementary items for conventional shopping intent.</p>
<p>To address these challenges, we take the complementary item recommendations produced by existing models and deploy a re-ranking strategy to balance the exploratory and the conventional complementary shopping intents. To illustrate the necessity and effectiveness of adjustable diversification of complementary item recommendations, we study two diversification strategies, <bold>heterogenization</bold> and <bold>homogenization</bold>. For heterogenization, we focus on diversified complementary item recommendations: we use a re-ranking strategy based on the determinantal point process (DPP) to diversify the complementary item recommendations produced by existing models. The more diversified complementary item recommendations fit the exploratory shopping intent. For homogenization, we enforce the homogeneity of the complementary item recommendations with a re-ranking strategy based on a modified DPP, which encourages the homogeneity of the recommendations for the conventional shopping intent.</p>
<p>To further address the personalized adjustment of diversification strategies, we estimate the user shopping intent (exploratory vs. conventional) from the user&#x00027;s shopping history. The estimated user shopping intent guides the recommender system to select the proper diversification re-ranking (heterogenization vs. homogenization) for the complementary item recommendations. We summarize our contributions as follows:</p>
<list list-type="bullet">
<list-item><p>We introduce the concept of exploratory and non-exploratory shopping demands from customer behavior to the modeling problem of complementary item recommendations, which has not been addressed before.</p></list-item>
<list-item><p>We further address the requirement of personalizing the demand for exploratory and non-exploratory recommendations based on the diversity of recommendations and propose a personalized ranking model for complementary item recommendations with dynamic adjustment.</p></list-item>
<list-item><p>We show the effectiveness of our proposed solution and conduct case studies on customer shopping intent on a publicly available dataset.</p></list-item>
</list>
<p>The rest of this article is structured as follows: we summarize the related articles in Section 2 and introduce the preliminaries of our model in Section 3. After that, we propose our model in Section 4. We provide the evaluation and result analysis in Section 5 and conclude our article in Section 6.</p></sec>
<sec id="s2">
<title>2. Related works</title>
<sec>
<title>2.1. Complementary recommendations</title>
<p>Many studies have focused on complementary item recommendations. Embedding-based methods, such as Barkan and Koenigstein (<xref ref-type="bibr" rid="B2">2016</xref>) and Wan et al. (<xref ref-type="bibr" rid="B14">2018</xref>), collaboratively learn the complementary item relationship from co-purchase data. Another way of using co-purchase data is to construct the co-purchase graph and apply graph neural networks to it (McAuley et al., <xref ref-type="bibr" rid="B12">2015</xref>; Wang et al., <xref ref-type="bibr" rid="B15">2018</xref>; Liu et al., <xref ref-type="bibr" rid="B9">2020</xref>). These methods use the co-purchase records as labels for link prediction based on the distance between item embeddings. In addition to vector item embeddings, Gaussian embedding is explored in Ma et al. (<xref ref-type="bibr" rid="B11">2021</xref>) to address the noise in co-purchase data for better complementarity learning. Beyond co-purchase data, many types of auxiliary data are incorporated into the modeling, such as the multimodal data of items (Zhang et al., <xref ref-type="bibr" rid="B19">2018</xref>) and the shopping context (Xu et al., <xref ref-type="bibr" rid="B18">2019</xref>). Diversified complementary recommendation is studied in Hao et al. (<xref ref-type="bibr" rid="B5">2020</xref>) by leveraging product-type information to improve diversity. However, it focuses on the diversified recall process rather than the ranking process that our article targets.</p>
<p>In our study, we leverage the <bold>triple2vec</bold> in Wan et al. (<xref ref-type="bibr" rid="B14">2018</xref>) to learn the complementary item embedding due to its effectiveness in learning the item vector embeddings from the co-purchase data.</p>
</sec>
<sec>
<title>2.2. Recommendation diversification</title>
<p>For a long time, not much importance was given to diversity in recommendations, as it is challenging to achieve both high accuracy and high diversity at the same time. This is called the <italic>accuracy-diversity dilemma</italic> (Liu et al., <xref ref-type="bibr" rid="B8">2012</xref>). Novelty and diversity of items have been improved by penalizing accuracy (D&#x000ED;ez et al., <xref ref-type="bibr" rid="B4">2019</xref>). Diversity has also been captured in an entropy regularizer (Qin and Zhu, <xref ref-type="bibr" rid="B13">2013</xref>). Post-processing methods for diversity have been proposed to improve the personalized recommendations generated by collaborative filtering (Adomavicius and Kwon, <xref ref-type="bibr" rid="B1">2012</xref>). The determinantal point process (DPP), a probabilistic model with many applications (Kulesza and Taskar, <xref ref-type="bibr" rid="B7">2012</xref>), has been used to make personalized diversified recommendations. DPP models have been extended with a tunable parameter that lets users smoothly control the level of diversity in recommendations, and have been applied to large-scale scenarios with faster inference (Wilhelm et al., <xref ref-type="bibr" rid="B16">2018</xref>). Deep reinforcement learning has also utilized DPP to generate diverse yet relevant item recommendations: a DPP kernel matrix is maintained for each user, constructed from two parts, a fixed similarity matrix capturing item-item similarity and the relevance of items dynamically learnt through an actor-critic reinforcement learning framework (Liu et al., <xref ref-type="bibr" rid="B10">2019</xref>). However, these methods pay little attention to the distinct diversity strategies required by the exploratory and conventional shopping intents. Our proposed method focuses on a combined re-ranking strategy for exploratory and conventional user shopping intents on complementary recommendations.</p></sec>
</sec>
<sec id="s3">
<title>3. Preliminaries</title>
<p>In this section, we first revisit the base model for complementary item recommendations, <bold>triple2vec</bold> (Wan et al., <xref ref-type="bibr" rid="B14">2018</xref>), which generates the item embeddings used for diversification. We choose <bold>triple2vec</bold> as our baseline model for complementary item recommendation because we assume that transaction data (i.e., product IDs and user IDs) are the only available input, due to their high accessibility for e-commerce systems, and that there are no additional contexts such as click/view signals and user profiles (e.g., age and gender). Then, we introduce DPP for recommendation diversification and its basic setting.</p>
<sec>
<title>3.1. Skip-gram-based item embedding and <italic>triple2vec</italic></title>
<p>Skip-gram-based methods for item embedding leverage the item co-occurrence signal (e.g., co-purchase of items). Models for complementary item recommendations such as McAuley et al. (<xref ref-type="bibr" rid="B12">2015</xref>) and Barkan and Koenigstein (<xref ref-type="bibr" rid="B2">2016</xref>) use exactly this item co-occurrence signal to model item complementarity. <bold>triple2vec</bold> in Wan et al. (<xref ref-type="bibr" rid="B14">2018</xref>) introduced the cohesion of (<italic>item, item, user</italic>) triplets that reflect the co-purchase of two items by the same user in the same basket. This technique improves the performance of complementary item recommendations, and <bold>triple2vec</bold> achieves state-of-the-art performance. As we focus on the post-processing of the recommendations, we leverage the item representations learned by <bold>triple2vec</bold> to generate item pools for downstream applications.</p>
<p>In <bold>triple2vec</bold>, a triplet (<italic>q, r, u</italic>), <italic>q</italic> &#x02208; <italic>V, r</italic> &#x02208; <italic>V, u</italic> &#x02208; <italic>U</italic>, represents the user-item and the item-item relationship, where <italic>V</italic> is the set of items and <italic>U</italic> is the set of users. Here, <italic>q</italic> and <italic>r</italic> are two items purchased by the user <italic>u</italic> in the same basket. In particular, we refer to <italic>q</italic> as the query item and <italic>r</italic> as the recommended item. The relationship between <italic>q</italic> and <italic>r</italic> can be viewed in the way that <italic>r</italic> is the recommended complementary item for the query item <italic>q</italic>. The cohesion of (<italic>q, r, u</italic>) in <bold>triple2vec</bold> is computed by Equation (1), where <italic>f</italic><sub><italic>q</italic></sub>, <italic>g</italic><sub><italic>r</italic></sub> are two sets of representations for items (<italic>q, r</italic>) and <italic>h</italic><sub><italic>u</italic></sub> is the user embedding.</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mover class="msup"><mml:mrow><mml:mover accent="false"><mml:mrow><mml:msubsup><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0FE37;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:mover><mml:mo>&#x0002B;</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mstyle displaystyle="true"><mml:munder accentunder="false"><mml:mrow><mml:msubsup><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>h</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>h</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0FE38;</mml:mo></mml:munder></mml:mstyle></mml:mrow><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:munder></mml:mstyle></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><italic>x</italic> and <italic>y</italic> in Equation (1) indicate the item-to-item complementarity and user-to-item compatibility, respectively. The loss function <inline-formula><mml:math id="M2"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow></mml:math></inline-formula> in Equation (2) computes the likelihood of all possible triplets <inline-formula><mml:math id="M3"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow></mml:math></inline-formula> and is optimized to learn representations of items and users.</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mi mathvariant="-tex-caligraphic">L</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mi mathvariant="-tex-caligraphic">T</mml:mi></mml:mrow></mml:mrow></mml:munder></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo class="qopname">log</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>|</mml:mo><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mo class="qopname">log</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi><mml:mo>|</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mo class="qopname">log</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>|</mml:mo><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Here, <inline-formula><mml:math id="M5"><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>|</mml:mo><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:munder><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></inline-formula>, <inline-formula><mml:math id="M6"><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi><mml:mo>|</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:munder><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></inline-formula>, and <inline-formula><mml:math id="M7"><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mo>|</mml:mo><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:munder><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo>,</mml:mo><mml:msup><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></inline-formula>.</p>
<p>We leverage <bold>triple2vec</bold> to learn item representations and generate item pools of complementary recommendations for downstream processes. To recall the item pool of complementary recommendations, we consider the inner product score <inline-formula><mml:math id="M8"><mml:msubsup><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> for two items <italic>q, r</italic>. For each query item <italic>q</italic>, we select a pool of items <italic>R</italic> &#x0003D; {<italic>r</italic><sub>1</sub>, &#x02026;, <italic>r</italic><sub><italic>m</italic></sub>} with the highest score of <inline-formula><mml:math id="M9"><mml:msubsup><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
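<p>As a minimal sketch, the cohesion score of Equation (1), the softmax likelihood <italic>p</italic>(<italic>r</italic>|<italic>q, u</italic>) used in Equation (2), and the top-<italic>m</italic> recall by the inner product score can be written as follows. The random matrices and the sizes are hypothetical stand-ins for the parameters that <bold>triple2vec</bold> would learn from transaction data:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_users, dim = 100, 10, 16   # hypothetical sizes

# Stand-ins for the two sets of item representations (f, g) and the
# user embeddings (h) that triple2vec learns from (item, item, user) triplets.
f = rng.normal(size=(n_items, dim))
g = rng.normal(size=(n_items, dim))
h = rng.normal(size=(n_users, dim))

def cohesion(q: int, r: int, u: int) -> float:
    """Equation (1): item-item complementarity (f_q^T g_r) plus
    user-item compatibility (f_q^T h_u + g_r^T h_u)."""
    return f[q] @ g[r] + f[q] @ h[u] + g[r] @ h[u]

def p_r_given_qu(q: int, u: int) -> np.ndarray:
    """Softmax over the cohesion scores of all candidate items r',
    one of the three likelihood terms inside the loss of Equation (2)."""
    scores = np.array([cohesion(q, r, u) for r in range(n_items)])
    scores -= scores.max()               # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

def recall_pool(q: int, m: int) -> np.ndarray:
    """Pool of m candidate complements for query q, ranked by f_q^T g_r."""
    scores = g @ f[q]
    scores[q] = -np.inf                  # exclude the query item itself
    return np.argsort(-scores)[:m]

pool = recall_pool(q=0, m=10)
```

<p>Note that the recall step deliberately drops the user term and ranks only by the item-to-item complementarity score, matching the pool-selection criterion described above.</p>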
</sec>
<sec>
<title>3.2. Recommendation diversification and determinantal point process</title>
<p>Improving the diversity of recommendations benefits recommender systems because it introduces novelty and better topic coverage (Ziegler et al., <xref ref-type="bibr" rid="B20">2005</xref>). Many studies on diversification follow the setting of a bi-criterion optimization problem, which balances the relevance (between the query and recalled elements) and the diversity (Wu et al., <xref ref-type="bibr" rid="B17">2019</xref>). In particular, diversity can be further divided into two types: (1) individual diversity<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> and (2) aggregate diversity<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> (Wu et al., <xref ref-type="bibr" rid="B17">2019</xref>). We focus on individual diversity in this study to adjust the diversity of complementary recommendations given a user&#x00027;s intent.</p>
<p>The determinantal point process (DPP) is a probabilistic model that is good at modeling repulsion. A recent study (Chen et al., <xref ref-type="bibr" rid="B3">2018</xref>) applies DPP to the diversification of item recommendations and develops a fast greedy MAP inference to generate diversified recommendations. Our study builds on DPP with the fast greedy MAP inference in Chen et al. (<xref ref-type="bibr" rid="B3">2018</xref>). We introduce the details of DPP and the fast greedy MAP inference following the notation in Chen et al. (<xref ref-type="bibr" rid="B3">2018</xref>). For the rest of our article, we denote the fast greedy MAP inference as <bold>FG-MAP</bold>.</p>
<p>Formally, the DPP on a discrete set <italic>Z</italic> &#x0003D; {1, 2, &#x02026;, <italic>M</italic>} is a probability measure <inline-formula><mml:math id="M10"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:math></inline-formula> on the 2<sup>|<italic>Z</italic>|</sup> subsets of <italic>Z</italic>, where |<italic>Z</italic>| is the number of elements in <italic>Z</italic>. Because the empty set is also a subset of <italic>Z</italic>, when <inline-formula><mml:math id="M11"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:math></inline-formula> does not give zero probability to the empty set, there exists a square, positive semidefinite (PSD), real matrix <bold>L</bold> &#x02208; &#x0211D;<sup><italic>M</italic>&#x000D7;<italic>M</italic></sup>, which satisfies Equation (3) for each subset <italic>Y</italic> &#x02286; <italic>Z</italic>.</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M12"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>Y</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0221D;</mml:mo><mml:mo class="qopname">det</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>Y</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>Y</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mi>Y</mml:mi><mml:mo>|</mml:mo><mml:mo>&#x000D7;</mml:mo><mml:mo>|</mml:mo><mml:mi>Y</mml:mi><mml:mo>|</mml:mo></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><bold>L</bold> serves as a kernel matrix indexed by the elements in <italic>Z</italic> and det(<bold>L</bold><sub><italic>Y</italic></sub>) is the determinant of the sub-matrix extracted from <bold>L</bold> based on elements in <italic>Y</italic>. Equation (3) indicates that the probability of a subset <italic>Y</italic> is proportional to the determinant of the corresponding sub-matrix of the PSD kernel. The MAP inference of the aforementioned DPP <inline-formula><mml:math id="M13"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow></mml:math></inline-formula> on <italic>Z</italic> is defined in Equation (4).</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M14"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>Y</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo class="qopname">arg</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo class="qopname">max</mml:mo></mml:mrow><mml:mrow><mml:mi>Y</mml:mi><mml:mo>&#x02286;</mml:mo><mml:mi>Z</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:mo class="qopname">det</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>Y</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
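<p>As a toy numeric check of the intuition behind Equation (3), the following sketch (with hypothetical 2-dimensional item features invented purely for illustration) shows that a subset of two similar items yields a near-singular sub-matrix and hence a much lower DPP probability than a subset of two dissimilar items; the normalizing constant det(<bold>L</bold> &#x0002B; <italic>I</italic>) is used because the determinants det(<bold>L</bold><sub><italic>Y</italic></sub>) over all subsets sum to it.</p>

```python
import numpy as np

# Hypothetical 2-D feature vectors for a 3-item ground set Z = {0, 1, 2}.
# Items 0 and 1 point in nearly the same direction; item 2 is different.
B = np.array([[1.0, 0.9, 0.1],
              [0.0, 0.1, 1.0]])
L = B.T @ B  # Gram matrix => square, real, PSD kernel

def dpp_prob(L, Y):
    """P(Y) = det(L_Y) / det(L + I); the denominator normalizes
    because the determinants of all sub-matrices sum to det(L + I)."""
    L_Y = L[np.ix_(Y, Y)]
    return np.linalg.det(L_Y) / np.linalg.det(L + np.eye(len(L)))

p_similar = dpp_prob(L, [0, 1])   # near-parallel pair: tiny determinant
p_diverse = dpp_prob(L, [0, 2])   # dissimilar pair: large determinant
```

<p>Here the similar pair receives only about 1% of the probability mass of the diverse pair, which is exactly the repulsion the DPP models.</p>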
<p>Unlike other inference tasks on DPPs, the MAP inference of a DPP is NP-hard. The <bold>FG-MAP</bold> algorithm approximates the MAP inference greedily. Equation (5) shows how to greedily select the next candidate item <italic>j</italic> to add to the growing subset <italic>Y</italic><sub><italic>g</italic></sub> &#x02286; <italic>Z</italic> built in previous iterations. After the current iteration, <italic>Y</italic><sub><italic>g</italic></sub> grows and <italic>Y</italic><sub><italic>g</italic></sub>: &#x0003D; <italic>Y</italic><sub><italic>g</italic></sub> &#x022C3; {<italic>j</italic>} <xref ref-type="fn" rid="fn0003"><sup>3</sup></xref>.</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M15"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mo class="qopname">arg</mml:mo><mml:mo class="qopname">max</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>Z</mml:mi><mml:mo>\</mml:mo><mml:msub><mml:mrow><mml:mi>Y</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo class="qopname">log</mml:mo><mml:mo class="qopname">det</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>Y</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub><mml:mo>&#x022C3;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mo class="qopname">log</mml:mo><mml:mo class="qopname">det</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>Y</mml:mi></mml:mrow><mml:mrow><mml:mi>g</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
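<p>A minimal sketch of the greedy selection in Equation (5): at each step, add the item with the largest log-determinant gain. This naive version recomputes determinants from scratch for clarity; <bold>FG-MAP</bold> reaches the same selections far faster via incremental Cholesky updates.</p>

```python
import numpy as np

def greedy_map(L, k):
    """Greedily approximate DPP MAP inference: repeatedly add the item
    maximizing log det(L_{Y + {i}}) - log det(L_Y), as in Equation (5)."""
    Y = []
    for _ in range(k):
        base = np.linalg.slogdet(L[np.ix_(Y, Y)])[1] if Y else 0.0
        best_item, best_gain = None, -np.inf
        for i in range(len(L)):
            if i in Y:
                continue
            cand = Y + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(cand, cand)])
            if sign <= 0:        # numerically singular sub-matrix: skip
                continue
            if logdet - base > best_gain:
                best_item, best_gain = i, logdet - base
        if best_item is None:
            break                # no candidate keeps the kernel non-singular
        Y.append(best_item)
    return Y

# Toy kernel: items 0 and 1 nearly parallel, item 2 pointing away.
B = np.array([[1.0, 0.9, 0.1],
              [0.0, 0.1, 1.0]])
picked = greedy_map(B.T @ B, 2)  # selects the two mutually diverse items
```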
<p>When <italic>Z</italic> is the pool of complementary recommendations <italic>R</italic> &#x0003D; {<italic>r</italic><sub>1</sub>, &#x02026;, <italic>r</italic><sub><italic>m</italic></sub>} recalled from the item representations (i.e., the item embeddings learned by <bold>triple2vec</bold>), the DPP on <italic>R</italic> maximizes <inline-formula><mml:math id="M16"><mml:mrow><mml:mi mathvariant="-tex-caligraphic">P</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>Y</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> and diversifies the recommendations by iteratively selecting <italic>r</italic><sub><italic>i</italic></sub> from <italic>R</italic>. The kernel matrix <bold>L</bold> can then be initialized from the item-to-item similarity matrix based on the item embeddings. In our study, we adapt DPP and <bold>FG-MAP</bold>, with <bold>L</bold> defined in Equation (6).</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M17"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mstyle mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mi>H</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mi>H</mml:mi><mml:mo>&#x02261;</mml:mo><mml:mrow><mml:mo stretchy="false">{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo stretchy="false">|</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:mi>R</mml:mi></mml:mrow><mml:mo stretchy="false">}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><italic>H</italic> is the sub-matrix of the item embeddings for the item pool <italic>R</italic> recalled by the <bold>triple2vec</bold> model. <italic>g</italic><sub><italic>r</italic><sub><italic>i</italic></sub></sub> is the normalized embedding of item <italic>r</italic><sub><italic>i</italic></sub>, and the value of <italic>H</italic><sup><italic>T</italic></sup><italic>H</italic> is shifted to ensure that <bold>L</bold> is PSD. We use only one set of item embeddings from the <bold>triple2vec</bold> model to compute item similarity, since the distance between <italic>f</italic><sub><italic>q</italic></sub> and <italic>g</italic><sub><italic>r</italic></sub> from the two sets of embeddings represents the complementarity of (<italic>q, r</italic>).</p></sec>
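<p>A short sketch of how the kernel of Equation (6) can be assembled from item embeddings (the embeddings below are random stand-ins for <bold>triple2vec</bold> output, used only for illustration). Normalizing each embedding makes the entries of <italic>H</italic><sup><italic>T</italic></sup><italic>H</italic> cosine similarities in [&#x02212;1, 1], and the shift maps them into [0, 1] while keeping <bold>L</bold> PSD, since <bold>L</bold> is half the sum of an all-ones matrix and a Gram matrix.</p>

```python
import numpy as np

# Hypothetical item embeddings g_{r_i} for 5 recalled items (dim 8),
# standing in for triple2vec output.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 8))

# Columns of H are the L2-normalized item embeddings.
H = (E / np.linalg.norm(E, axis=1, keepdims=True)).T

# Equation (6): shifted cosine-similarity kernel with unit diagonal.
L = (1.0 + H.T @ H) / 2.0
```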
</sec>
<sec id="s4">
<title>4. Diversification strategies</title>
<p>As mentioned above, diversification strategies for complementary item recommendations in online grocery fall into two types, heterogenization and homogenization, serving exploratory and conventional complementary shopping intent, respectively. In this section, we first introduce the proposed diversification strategies based on DPP. We then present our user shopping intent modeling and the personalized selection of diversification strategies.</p>
<sec>
<title>4.1. Strategy 1: Heterogenization</title>
<p>The heterogenization strategy for complementary item recommendation increases the diversity within the complementary recommendations <italic>R</italic> recalled by a complementary item recommendation model, i.e., <bold>triple2vec</bold>. It serves users&#x00027; exploratory shopping intent by surfacing more diverse recommendations. We first generate <italic>R</italic> to guarantee complementarity and then re-rank the items in <italic>R</italic> so that more diverse but still relevant items rise to the top. If diversification were not confined to the pool of pre-selected complementary items, the diversification logic could easily be biased toward irrelevant items. We re-rank the items in <italic>R</italic> by extending <bold>FG-MAP</bold> into a bi-criterion optimization. Specifically, we use the score <inline-formula><mml:math id="M18"><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:math></inline-formula> as the complementarity of (<italic>q, r</italic><sub><italic>i</italic></sub>), where <italic>f</italic><sub><italic>q</italic></sub> and <italic>g</italic><sub><italic>r</italic><sub><italic>i</italic></sub></sub> are normalized item embeddings. Equation (7) shows the modified objective function for the diversification re-rank.</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M19"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mo class="qopname">arg</mml:mo><mml:mo class="qopname">max</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:mi>R</mml:mi><mml:mo>\</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mstyle displaystyle="true"><mml:munder accentunder="false"><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0FE38;</mml:mo></mml:munder></mml:mstyle></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">complementarity</mml:mtext></mml:mrow></mml:munder></mml:mstyle></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>&#x0002B;</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mstyle displaystyle="true"><mml:munder accentunder="false"><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo class="qopname">log</mml:mo><mml:mo class="qopname">det</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle 
mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mo class="qopname">log</mml:mo><mml:mo class="qopname">det</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>L</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x0FE38;</mml:mo></mml:munder></mml:mstyle></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">increment&#x000A0;of&#x000A0;diversification</mml:mtext></mml:mrow></mml:munder></mml:mstyle></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>At the <italic>t</italic>th iteration, <italic>R</italic><sub><italic>t,d</italic></sub>: &#x0003D; <italic>R</italic><sub><italic>t</italic>&#x02212;1, <italic>d</italic></sub> &#x0002B; [<italic>r</italic><sub><italic>j</italic></sub>], where <italic>R</italic><sub>0,<italic>d</italic></sub> &#x0003D; [] and <italic>R</italic><sub><italic>t</italic> &#x02212; 1, <italic>d</italic></sub> &#x0002B; [<italic>r</italic><sub><italic>j</italic></sub>] means that the recommendation <italic>r</italic><sub><italic>j</italic></sub> newly selected by the diversification re-rank is appended to the end of the current item list <italic>R</italic><sub><italic>t</italic>&#x02212;1, <italic>d</italic></sub>. The weight &#x003B1; controls the amount of diversity introduced into the re-ranked item list. Each selected item <italic>r</italic><sub><italic>j</italic></sub> maximizes the combined score of diversity and complementarity. Compared with the original item list <italic>R</italic>, in which items are simply sorted by the score <italic>S</italic><sub><italic>q</italic>,<sub><italic>r</italic></sub><sub><italic>i</italic></sub></sub> in descending order, the re-ranked item list <italic>R</italic><sub><italic>d</italic></sub> surfaces more diversified recommendations toward the top.</p>
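<p>The bi-criterion greedy step of Equation (7) can be sketched as follows; here <monospace>S</monospace> holds the complementarity scores <italic>S</italic><sub><italic>q,r</italic><sub><italic>i</italic></sub></sub> and <bold>L</bold> is the diversity kernel, both assumed precomputed from the embeddings. This direct version recomputes log-determinants instead of using <bold>FG-MAP</bold>&#x00027;s fast updates.</p>

```python
import numpy as np

def rerank_heterogenize(S, L, alpha, k):
    """Greedy bi-criterion re-rank (Equation 7): each step adds the item
    maximizing alpha * complementarity + (1 - alpha) * log-det gain."""
    R_d = []
    for _ in range(k):
        base = np.linalg.slogdet(L[np.ix_(R_d, R_d)])[1] if R_d else 0.0
        best, best_val = None, -np.inf
        for i in range(len(S)):
            if i in R_d:
                continue
            cand = R_d + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(cand, cand)])
            gain = (logdet - base) if sign > 0 else -np.inf
            val = alpha * S[i] + (1 - alpha) * gain
            if val > best_val:
                best, best_val = i, val
        if best is None:
            break
        R_d.append(best)
    return R_d

# With an identity kernel the diversity gain is always zero, so the
# re-rank reduces to sorting by complementarity score.
order_plain = rerank_heterogenize(np.array([0.9, 0.8, 0.1]), np.eye(3), 0.5, 3)

# When items 0 and 1 are near-duplicates, item 2 is promoted above item 1
# even though its complementarity score is lower.
L_corr = np.array([[1.0, 0.99, 0.1],
                   [0.99, 1.0, 0.1],
                   [0.1, 0.1, 1.0]])
order_div = rerank_heterogenize(np.array([0.9, 0.85, 0.5]), L_corr, 0.5, 3)
```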
</sec>
<sec>
<title>4.2. Strategy 2: Homogenization</title>
<p>The homogenization strategy differs from the heterogenization strategy: we surface more items that are related to the query item but fall under the same topic, instead of diverse results. For example, assume a query item <monospace>milk</monospace> has a list of recommendations <italic>R</italic> = {<monospace>eggs</monospace>, <monospace>cheese</monospace>, <monospace>bread</monospace>, <monospace>margarine</monospace>, <monospace>banana</monospace>, <monospace>sausage</monospace>, <monospace>yogurt</monospace>, <monospace>cereal</monospace>}. Under the homogenization strategy, the re-ranked recommendations could be <italic>R</italic><sub><italic>s</italic></sub> = {<monospace>eggs</monospace>, <monospace>cheese</monospace>, <monospace>margarine</monospace>, <monospace>yogurt</monospace>, <monospace>banana</monospace>, <monospace>bread</monospace>, <monospace>sausage</monospace>, <monospace>cereal</monospace>} <xref ref-type="fn" rid="fn0004"><sup>4</sup></xref>. This strategy encourages more homogeneity while keeping the complementary relationship between the recommendations and the query item: <italic>R</italic><sub><italic>s</italic></sub> surfaces more items from the Dairy &#x00026; Eggs department, such as cheese and yogurt. The homogenization strategy can be promoted through the similarity among items in the recall set of complementary recommendations. Unlike the heterogenization strategy, which diverges the item relationships, boosting the homogeneity of the recommendations is more stable, so we can mine candidate items from a bigger recall set. Formally, we recall extra complementary items <italic>R</italic><sub><italic>x</italic></sub> &#x0003D; {<italic>r</italic><sub><italic>m</italic>&#x0002B;1</sub>, &#x02026;, <italic>r</italic><sub><italic>n</italic></sub>} and append them to the end of <italic>R</italic>; the new item list becomes <italic>R</italic> &#x0002B; <italic>R</italic><sub><italic>x</italic></sub>. To promote similarity between recommendations, we modify the kernel <bold>L</bold> in DPP by Equation (8) and apply DPP to the new dissimilarity matrix <bold>L&#x02032;</bold>,</p>
<disp-formula id="E9"><label>(8)</label><mml:math id="M21"><mml:mstyle mathvariant="bold" mathsize="normal"><mml:msup><mml:mi>L</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup></mml:mstyle><mml:mo>=</mml:mo><mml:mstyle mathvariant="bold" mathsize="normal"><mml:mn>1</mml:mn></mml:mstyle><mml:mo>+</mml:mo><mml:mtext>diag</mml:mtext><mml:mo stretchy="false">(</mml:mo><mml:mstyle mathvariant="bold" mathsize="normal"><mml:mi>L</mml:mi></mml:mstyle><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mstyle mathvariant="bold" mathsize="normal"><mml:mi>L</mml:mi></mml:mstyle></mml:math></disp-formula>
<p>where diag(<bold>L</bold>) is a diagonal matrix whose main diagonal equals the diagonal of <bold>L</bold> and <bold>1</bold> is a square matrix with all entries equal to 1. Plugging <bold>L&#x02032;</bold> into Equation (7) gives a new re-ranking objective on the extended item pool <italic>R</italic> &#x0002B; <italic>R</italic><sub><italic>x</italic></sub>, shown in Equation (9).</p>
<disp-formula id="E10"><label>(9)</label><mml:math id="M22"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mi>r</mml:mi><mml:mi>h</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi>arg</mml:mi><mml:msub><mml:mi>max</mml:mi><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x02208;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>R</mml:mi><mml:mo>+</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mi>x</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x02216;</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msub><mml:munder><mml:munder><mml:mrow><mml:mi>&#x003B2;</mml:mi><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>q</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="true">&#x0FE38;</mml:mo></mml:munder><mml:mrow><mml:mtext>complementarity</mml:mtext></mml:mrow></mml:munder></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>+</mml:mo><mml:munder><mml:munder><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>log</mml:mi><mml:mi>det</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mstyle mathvariant="bold" mathsize="normal"><mml:msup><mml:mi>L</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup></mml:mstyle><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>s</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo 
stretchy="false">]</mml:mo></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>log</mml:mi><mml:mi>det</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mstyle mathvariant="bold" mathsize="normal"><mml:msup><mml:mi>L</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup></mml:mstyle><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="true">&#x0FE38;</mml:mo></mml:munder><mml:mrow><mml:mtext>increment&#x000A0;of&#x000A0;similarity&#x000A0;between&#x000A0;recommendations</mml:mtext></mml:mrow></mml:munder></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Here, the parameter &#x003B2; controls the degree of similarity between recommendations. At the <italic>t</italic>th iteration of Equation (9), <italic>R</italic><sub><italic>t,s</italic></sub>: &#x0003D; <italic>R</italic><sub><italic>t</italic> &#x02212; 1, <italic>s</italic></sub> &#x0002B; [<italic>r</italic><sub><italic>h</italic></sub>], where <italic>R</italic><sub>0, <italic>s</italic></sub> &#x0003D; []. Both Equations (7) and (9) can be optimized by the <bold>FG-MAP</bold> algorithm of Chen et al. (<xref ref-type="bibr" rid="B3">2018</xref>)<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref>.</p>
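<p>The kernel flip of Equation (8) is a one-liner. With the unit-diagonal kernel from Equation (6), <bold>L&#x02032;</bold> keeps ones on the diagonal while mapping high similarities to small off-diagonal entries, so the determinant-based selection of Equation (9) now favors keeping similar items together. A minimal sketch with an invented toy kernel:</p>

```python
import numpy as np

def homogenize_kernel(L):
    """Equation (8): L' = 1 + diag(L) - L. Similar item pairs get small
    entries in L', so the DPP's repulsion now acts on dissimilar pairs."""
    return np.ones_like(L) + np.diag(np.diag(L)) - L

# Toy similarity kernel with unit diagonal: items 0 and 1 are similar.
L = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
L_prime = homogenize_kernel(L)  # similar pair (0, 1) now has a small entry
```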
</sec>
<sec>
<title>4.3. User intent modeling and diversification strategy selection</title>
<p>Having two re-ranking strategies alone is not enough; we also need to decide which of the two diversified re-ranking strategies to apply for a given user. We leverage the heuristic that users who usually add more diverse items during their next-<italic>k</italic> purchases prefer more exploratory complementary shopping, served by the heterogenization strategy, whereas users who commonly add less diverse items during their next-<italic>k</italic> purchases prefer more conventional complementary shopping, served by the homogenization strategy.</p>
<p>Formally, given a query item <italic>q</italic> at time <italic>t</italic> and a list of next-<italic>k</italic> items <italic>B</italic><sub><italic>q</italic></sub> &#x0003D; {<italic>b</italic><sub><italic>t</italic>&#x0002B;1</sub>, &#x02026;, <italic>b</italic><sub><italic>t</italic>&#x0002B;<italic>k</italic></sub>} purchased by a user <italic>u</italic>, we leverage the taxonomy information <italic>tax</italic>(&#x000B7;) <xref ref-type="fn" rid="fn0006"><sup>6</sup></xref> to estimate how much diversity the user <italic>u</italic> prefers. Let <italic>B</italic><sub><italic>T,q</italic></sub> &#x0003D; [<italic>tax</italic>(<italic>b</italic><sub><italic>t</italic>&#x0002B;1</sub>), &#x02026;, <italic>tax</italic>(<italic>b</italic><sub><italic>t</italic>&#x0002B;<italic>k</italic></sub>)] be the list of departments of the next-<italic>k</italic> items purchased by the user and |<italic>B</italic><sub><italic>T,q</italic></sub>| be the number of unique elements in <italic>B</italic><sub><italic>T,q</italic></sub>. We can estimate the degree of diversity for the query item <italic>q</italic> and the user <italic>u</italic> in Equation (10).</p>
<disp-formula id="E12"><label>(10)</label><mml:math id="M24"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>z</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo stretchy="false">|</mml:mo><mml:msub><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mo>,</mml:mo><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>However, the score <italic>z</italic><sub><italic>u,q</italic></sub> is at the user-item level and is unstable due to sparsity. We therefore extend it to a score at the user-department level, as shown in Equation (11), where <italic>dept</italic><sub><italic>i</italic></sub> is the department <italic>i</italic> and the score <italic>z</italic><sub><italic>u, dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub> is the average of the score <italic>z</italic><sub><italic>u,q</italic></sub> over all query items satisfying <italic>tax</italic>(<italic>q</italic>) &#x0003D; <italic>dept</italic><sub><italic>i</italic></sub>.</p>
<disp-formula id="E13"><label>(11)</label><mml:math id="M25"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>z</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>d</mml:mi><mml:mi>e</mml:mi><mml:mi>p</mml:mi><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mi>q</mml:mi><mml:mo>|</mml:mo><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>q</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>d</mml:mi><mml:mi>e</mml:mi><mml:mi>p</mml:mi><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>z</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>q</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
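<p>Equations (10) and (11) amount to counting distinct departments and averaging per department. A small sketch with a hypothetical taxonomy map (item and department names invented for illustration):</p>

```python
from collections import defaultdict

def diversity_score(next_k_items, tax):
    """Equation (10): fraction of distinct departments among the
    next-k items purchased after a query item."""
    depts = [tax[b] for b in next_k_items]
    return len(set(depts)) / len(depts)

def department_scores(queries, next_k_lists, tax):
    """Equation (11): for one user, average z_{u,q} over all query
    items q belonging to the same department."""
    sums, counts = defaultdict(float), defaultdict(int)
    for q, basket in zip(queries, next_k_lists):
        sums[tax[q]] += diversity_score(basket, tax)
        counts[tax[q]] += 1
    return {d: sums[d] / counts[d] for d in sums}

# Hypothetical taxonomy and next-5 purchase lists for two dairy queries.
tax = {"milk": "dairy", "eggs": "dairy", "cheese": "dairy",
       "bread": "bakery", "banana": "produce", "yogurt": "dairy"}
z_dept = department_scores(
    ["milk", "eggs"],
    [["eggs", "cheese", "bread", "banana", "yogurt"],    # z = 3/5
     ["cheese", "yogurt", "milk", "bread", "banana"]],   # z = 3/5
    tax)
```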
<p>We treat the score <italic>z</italic><sub><italic>u, dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub> as the user intent score for exploratory complementary shopping and binarize it with a threshold <italic>T</italic> &#x02208; [0, 1] learnt from the training data. If <italic>z</italic><sub><italic>u, dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub> &#x0003C; <italic>T</italic>, the user <italic>u</italic> prefers more conventional complementary shopping under the department <italic>dept</italic><sub><italic>i</italic></sub>; otherwise, the user <italic>u</italic> likely prefers more exploratory complementary items, because <italic>u</italic> tends to add items from different departments during the next-<italic>k</italic> purchases. Combining the score <italic>z</italic><sub><italic>u, dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub> with the heterogenization and homogenization strategies yields a dynamic re-ranking algorithm for complementary item recommendations (summarized in <xref ref-type="table" rid="A1">Algorithm 1</xref>). It applies one of the two diversified re-ranking strategies based on the user&#x00027;s intent for the department <italic>dept</italic><sub><italic>i</italic></sub> of the query item <italic>q</italic>.</p>
<table-wrap position="float" id="A1">
<label>Algorithm 1</label>
<caption><p>Dynamic re-ranking of complementary recommendation with user intent.</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="left" valign="top" colspan="2"><bold>Require:</bold> <italic>u</italic>, <italic>q</italic>, <italic>R</italic>, <italic>R</italic><sub><italic>x</italic></sub>, &#x003B1;, &#x003B2;, <italic>T</italic>, <italic>z</italic><sub>0</sub>, <italic>k</italic>;</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><bold>Ensure:</bold></td>
</tr>
<tr>
<td align="left" valign="top">1:</td>
<td align="left" valign="top"><italic>R</italic><sub><italic>out</italic></sub> &#x0003D; []</td>
</tr>
<tr>
<td align="left" valign="top">2:</td>
<td align="left" valign="top"><italic>dept</italic><sub><italic>i</italic></sub> &#x0003D; <italic>tax</italic>(<italic>q</italic>)</td>
</tr>
<tr>
<td align="left" valign="top">3:</td>
<td align="left" valign="top"><bold>if</bold> <italic>z</italic><sub><italic>u, dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub> available <bold>then</bold></td>
</tr>
<tr>
<td align="left" valign="top">4:</td>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;use <italic>z</italic><sub><italic>u, dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub></td>
</tr>
<tr>
<td align="left" valign="top">5:</td>
<td align="left" valign="top"><bold>else</bold></td>
</tr>
<tr>
<td align="left" valign="top">6:</td>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;<italic>z</italic><sub><italic>u, dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub> &#x0003D; <italic>z</italic><sub>0</sub></td>
</tr>
<tr>
<td align="left" valign="top">7:</td>
<td align="left" valign="top"><bold>end if</bold></td>
</tr>
<tr>
<td align="left" valign="top">8:</td>
<td align="left" valign="top"><bold>if</bold> <italic>z</italic><sub><italic>u,dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub> &#x0003E; <italic>T</italic> <bold>then</bold></td>
</tr>
<tr>
<td align="left" valign="top">9:</td>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;use <italic>R</italic>, &#x003B1; to compute <italic>R</italic><sub><italic>d</italic></sub> by Equation (7) and <bold>FG-MAP</bold> with <italic>k</italic> iterations (heterogenization strategy)</td>
</tr>
<tr>
<td align="left" valign="top">10:</td>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;<italic>R</italic><sub><italic>out</italic></sub>: &#x0003D; <italic>R</italic><sub><italic>d</italic></sub></td>
</tr>
<tr>
<td align="left" valign="top">11:</td>
<td align="left" valign="top"><bold>else</bold></td>
</tr>
<tr>
<td align="left" valign="top">12:</td>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;use <italic>R</italic> &#x0002B; <italic>R</italic><sub><italic>x</italic></sub>, &#x003B2; to compute <italic>R</italic><sub><italic>s</italic></sub> by Equation (9) and <bold>FG-MAP</bold> with <italic>k</italic> iterations (homogenization strategy)</td>
</tr>
<tr>
<td align="left" valign="top">13:</td>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;<italic>R</italic><sub><italic>out</italic></sub>: &#x0003D; <italic>R</italic><sub><italic>s</italic></sub></td>
</tr>
<tr>
<td align="left" valign="top">14:</td>
<td align="left" valign="top"><bold>end if</bold></td>
</tr>
<tr>
<td align="left" valign="top">15:</td>
<td align="left" valign="top"><bold>return</bold> <italic>R</italic><sub><italic>out</italic></sub> as the re-ranked complementary recommendations for <italic>u</italic> and <italic>q</italic></td>
</tr>  
</tbody>
</table>
</table-wrap>
 <p>We use <italic>z</italic><sub>0</sub> as a default value for cold departments of query items that are not seen in the history. <italic>z</italic><sub>0</sub> can be initialized as the average of all <italic>z</italic><sub><italic>u, dep</italic><sub><italic>t</italic></sub><sub><italic>i</italic></sub></sub>.</p></sec>
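<p>The control flow of Algorithm 1 can be sketched as below; the callables <monospace>heterogenize</monospace>/<monospace>homogenize</monospace> stand in for the Equation (7) and Equation (9) re-rankers, and the dictionary-based score lookup is an illustrative assumption rather than the authors&#x00027; implementation.</p>

```python
def dynamic_rerank(u, q, z_scores, tax, R, R_x, T, z0,
                   heterogenize, homogenize):
    """Algorithm 1: choose a re-ranking strategy from the user's
    department-level intent score, defaulting to z0 for cold departments."""
    dept = tax[q]
    z = z_scores.get((u, dept), z0)          # lines 3-7 of Algorithm 1
    if z > T:
        return heterogenize(R)               # exploratory: heterogenize R
    return homogenize(R + R_x)               # conventional: homogenize R + R_x

# Tiny demo with placeholder re-rankers that just tag their input.
z_scores = {("u1", "dairy"): 0.8}
tax = {"milk": "dairy"}
out_known = dynamic_rerank("u1", "milk", z_scores, tax, ["a"], ["b"],
                           T=0.5, z0=0.3,
                           heterogenize=lambda R: ("het", R),
                           homogenize=lambda R: ("hom", R))
out_cold = dynamic_rerank("u2", "milk", z_scores, tax, ["a"], ["b"],
                          T=0.5, z0=0.3,
                          heterogenize=lambda R: ("het", R),
                          homogenize=lambda R: ("hom", R))
```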
</sec>
<sec id="s5">
<title>5. Evaluation</title>
<p>In this section, we evaluate our proposed solution on the publicly available Instacart dataset (Instacart, <xref ref-type="bibr" rid="B6">2017</xref>). We also conduct a parameter analysis of re-ranking performance with different <italic>T</italic>.</p>
<sec>
<title>5.1. Evaluation setting</title>
<p>The Instacart dataset (Instacart, <xref ref-type="bibr" rid="B6">2017</xref>) has 49,677 distinct items, 134 distinct aisles, 21 distinct departments, and 206,209 distinct users. We train the <bold>triple2vec</bold> model on the Instacart training dataset, with an embedding dimension of 100, a batch size of 128, an initial learning rate of 0.05, and a stochastic gradient descent optimizer. We also compute <italic>z</italic><sub><italic>u, dept</italic><sub><italic>i</italic></sub></sub> for each pair of (<italic>u, dept</italic><sub><italic>i</italic></sub>) for the next-5 purchases (<italic>k</italic> &#x0003D; 5 in Equation 10). When evaluating the re-ranking strategies, we compare the results before and after the re-rank. Given a query item <italic>q</italic>, a user <italic>u</italic>, and recommendations <italic>R</italic> and <italic>R</italic><sub><italic>x</italic></sub> generated by the <bold>triple2vec</bold> model, we compute the Hit-Rate&#x00040;5 and Normalized Discounted Cumulative Gain (NDCG&#x00040;5) on the task of next-item prediction for the raw complementary recommendations <italic>R</italic> and for the complementary recommendations re-ranked by (1) the heterogenization strategy only, (2) the homogenization strategy only, and (3) the combination of the heterogenization and homogenization strategies driven dynamically by user intent scores. We focus on the next-5 purchases because a user intent might last only for a short period, and we want to study the impact of the two complementary re-ranking strategies on the top recommendations; a larger <italic>k</italic> would itself introduce diversity into the recommendations.
Here, we define <italic>R</italic> &#x0003D; {<italic>r</italic><sub>1</sub>, <italic>r</italic><sub>2</sub>, <italic>r</italic><sub>3</sub>, <italic>r</italic><sub>4</sub>, <italic>r</italic><sub>5</sub>} and <italic>R</italic><sub><italic>x</italic></sub> &#x0003D; {<italic>r</italic><sub>6</sub>, <italic>r</italic><sub>7</sub>, <italic>r</italic><sub>8</sub>, <italic>r</italic><sub>9</sub>, <italic>r</italic><sub>10</sub>} to compute the metrics Hit-Rate&#x00040;5 and NDCG&#x00040;5.</p>
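<p>For concreteness, binary-relevance versions of the two metrics can be sketched as follows (a minimal illustration; the exact per-query aggregation used in our evaluation pipeline may differ):</p>

```python
import math

def hit_rate_at_k(recs, purchased, k=5):
    # 1 if any of the user's next purchases appears in the top-k
    # recommendations, 0 otherwise; Hit-Rate@k is the average of
    # this indicator over all evaluation queries.
    return int(any(r in purchased for r in recs[:k]))

def ndcg_at_k(recs, purchased, k=5):
    # NDCG@k with binary relevance against the next purchased items:
    # DCG discounts hits by log2 of their rank, then normalizes by
    # the DCG of an ideal ranking.
    dcg = sum(1.0 / math.log2(i + 2)
              for i, r in enumerate(recs[:k]) if r in purchased)
    ideal_hits = min(k, len(purchased))
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0
```

<p>For example, ranking the purchased item first yields NDCG&#x00040;5 &#x0003D; 1.0, while placing it second yields 1/log<sub>2</sub>(3) &#x02248; 0.63.</p>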
<p>Since this is a novel study of exploratory vs. non-exploratory user behaviors for complementary item recommendations, proper baselines are hard to find. We choose three baseline models for comparison. (1) As mentioned above, because we only consider the transaction data due to its high accessibility, we use the raw recommendations from <italic><bold>triple2vec</bold></italic> as pure complementary item recommendations. (2) The second baseline model is the recommendation diversified by DPP, i.e., the pure heterogenization strategy. (3) Similarly, we use <italic>T</italic> &#x0003D; 1 to force homogenization and generate our third baseline model for comparison.</p>
<p>To further understand the trade-off between heterogenization and homogenization strategies, we evaluate the combined strategy with <italic>T</italic> &#x02208; {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. We use &#x003B1; &#x0003D; &#x003B2; &#x0003D; 0.01 for evaluations.</p>
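<p>The dynamic combination can be sketched as a per-(<italic>u, dept</italic><sub><italic>i</italic></sub>) gate on the intent score (a minimal sketch; <monospace>heterogenize</monospace> and <monospace>homogenize</monospace> are placeholders for the DPP-based re-rankers described earlier, not their implementations):</p>

```python
def combined_rerank(recs, z_score, T, heterogenize, homogenize):
    # Gate the two re-ranking strategies on the user intent score
    # z_score in [0, 1]. T = 0 always heterogenizes (pure DPP
    # diversification); T = 1 always homogenizes.
    if z_score >= T:
        return heterogenize(recs)  # exploratory intent: diversify
    return homogenize(recs)        # conventional intent: stay in-department
```

<p>Under this sketch, only the (user, department) pairs whose intent score falls below <italic>T</italic> receive homogenized recommendations, which matches the two boundary baselines <italic>T</italic> &#x0003D; 0 and <italic>T</italic> &#x0003D; 1.</p>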
</sec>
<sec>
<title>5.2. Evaluation results</title>
<p>We evaluate our re-ranking strategies on the Instacart evaluation dataset; the detailed results are shown in <xref ref-type="table" rid="T1">Table 1</xref>. The heterogenization strategy improves Hit-Rate&#x00040;5 and NDCG&#x00040;5 compared with the raw recommendations, while using the homogenization strategy alone reduces the performance. Combining both re-ranking strategies with a proper <italic>T</italic> improves the overall performance. In particular, <italic>T</italic> &#x0003D; 0.2 achieves the best Hit-Rate&#x00040;5 and <italic>T</italic> &#x0003D; 0.1 achieves the best NDCG&#x00040;5. This result is reasonable because <italic>T</italic> &#x0003D; 0.2 roughly corresponds to users who, on average, purchase the next-5 items within the same department. The evaluation results show that the homogenization strategy better covers users who prefer conventional complementary shopping.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Detailed results of next-item prediction.</p></caption> 
<table frame="box" rules="all">
<thead>
<tr style="background-color:#8f9496;color:#ffffff">
<th/>
<th valign="top" align="center"><bold>Hit-Rate&#x00040;5</bold></th>
<th valign="top" align="center"><bold>NDCG&#x00040;5</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Raw recommendation (triple2vec)</td>
<td valign="top" align="center">0.05581</td>
<td valign="top" align="center">0.03216</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0 (heterogenization only by DPP)</td>
<td valign="top" align="center">0.05581</td>
<td valign="top" align="center">0.03379</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.1</td>
<td valign="top" align="center">0.05612</td>
<td valign="top" align="center"><underline>0.03380</underline></td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.2</td>
<td valign="top" align="center"><underline>0.05625</underline></td>
<td valign="top" align="center">0.03377</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.3</td>
<td valign="top" align="center">0.05612</td>
<td valign="top" align="center">0.03371</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.4</td>
<td valign="top" align="center">0.05558</td>
<td valign="top" align="center">0.03318</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.5</td>
<td valign="top" align="center">0.05388</td>
<td valign="top" align="center">0.03214</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.6</td>
<td valign="top" align="center">0.05259</td>
<td valign="top" align="center">0.03133</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.7</td>
<td valign="top" align="center">0.05261</td>
<td valign="top" align="center">0.03128</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.8</td>
<td valign="top" align="center">0.05261</td>
<td valign="top" align="center">0.03127</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 0.9</td>
<td valign="top" align="center">0.05261</td>
<td valign="top" align="center">0.03127</td>
</tr> <tr>
<td valign="top" align="left"><italic>T</italic> = 1 (homogenization only)</td>
<td valign="top" align="center">0.05261</td>
<td valign="top" align="center">0.03127</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>Hit-Rate&#x00040;5 and NDCG&#x00040;5 are shown. The highest score is highlighted by underline.</p>
</table-wrap-foot>
</table-wrap>
<p>Note that applying only the homogenization strategy reduces both Hit-Rate&#x00040;5 and NDCG&#x00040;5. This might be because showing complementary recommendations only within a narrow scope is likely to miss users&#x00027; interests (see Section 5.3 for more details): if a user is not interested in the first recommended item, this user will likely not be interested in the following recommendations either, because they are similar. The heterogenization strategy mitigates this issue by surfacing different complementary items to the top, so the re-ranked recommendations are more likely to hit the user&#x00027;s interests. Combining the two strategies covers both the exploratory and the conventional complementary shopping intents of users.</p>
<p>In summary, our results show that combining the two strategies dynamically improves the overall performance compared with using a single diversification strategy alone.</p>
</sec>
<sec>
<title>5.3. User intent modeling</title>
<p>To further demonstrate the necessity of personalized diversification strategies for complementary item recommendations, we visualize the distribution of the user intent score <italic>z</italic><sub><italic>u, dept</italic><sub><italic>i</italic></sub></sub> by department. <xref ref-type="fig" rid="F3">Figure 3</xref> summarizes the distributions. For a given department, the user intent scores are distributed differently. For example, the majority of the user intent scores in the Deli and Produce departments fall in the range [0.3, 0.6], which indicates that users tend to shop for more diverse items when the query items are from these departments. Dairy and Beverage show results similar to Deli and Produce. However, for departments such as Household and Pantry, the majority of the user intent scores are in the range [0.1, 0.4]. Compared with other departments, users tend to purchase more homogeneous items when the query items are from Household and Pantry, which is reasonable because these departments usually cover most of the department-related shopping demands and correlate less with other departments. This observation highlights the need to tune the diversification of complementary item recommendations with personalization.</p>
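<p>To make the score concrete, one plausible proxy (a hypothetical stand-in for illustration only; Equation 10 defines the exact form) reads <italic>z</italic><sub><italic>u, dept</italic><sub><italic>i</italic></sub></sub> as the fraction of a user&#x00027;s next-<italic>k</italic> purchases that fall outside the query item&#x00027;s department:</p>

```python
def intent_score(next_items, query_dept, dept_of, k=5):
    # Hypothetical proxy for z_{u, dept_i}: the fraction of the
    # next-k purchases whose department differs from the query
    # item's department. Higher values indicate more exploratory
    # (cross-department) shopping behavior.
    window = next_items[:k]
    if not window:
        return 0.0
    cross = sum(1 for item in window if dept_of[item] != query_dept)
    return cross / len(window)
```

<p>Under this reading, a Household query with most follow-up purchases also in Household yields a low score, consistent with the distributions in <xref ref-type="fig" rid="F3">Figure 3</xref>.</p>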
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Distribution of user intent scores by department. <bold>(A)</bold> Deli, <bold>(B)</bold> Produce, <bold>(C)</bold> Dairy eggs, <bold>(D)</bold> Beverage, <bold>(E)</bold> Meat seafood, <bold>(F)</bold> Canned goods, <bold>(G)</bold> Frozen, <bold>(H)</bold> Bakery, <bold>(I)</bold> Breakfast, <bold>(J)</bold> Pantry, and <bold>(K)</bold> Household.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-974072-g0003.tif"/>
</fig></sec>
</sec>
<sec id="s6">
<title>6. Conclusion and future work</title>
<p>We focus on the re-ranking of complementary recommendations in online grocery and point out the exploratory and conventional complementary shopping intents from users. To fulfill these two user intents, we propose two re-ranking strategies, heterogenization and homogenization, based on DPP on the raw complementary recommendations and dynamically combine two re-rankings as a final solution to improve the performance. We demonstrate the effectiveness of our solution on the publicly available Instacart dataset.</p></sec>
<sec sec-type="data-availability" id="s7">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.</p></sec>
<sec sec-type="author-contributions" id="s8">
<title>Author contributions</title>
<p>NS helped with the experiments. JC, SK, and KA helped iterate on the research ideas and the design during this research. All authors contributed to the article and approved the submitted version.</p></sec>
</body>
<back>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>LM, JC, SK, and KA were employed by Walmart Global Tech. NS was employed by DoorDash.</p>
</sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>Individual diversity refers to the diversity of recommendations for a given user; it focuses on the problem of maximizing item novelty relative to already recommended items when generating the recommendation list.</p></fn>
<fn id="fn0002"><p><sup>2</sup>Aggregate diversity refers to the diversity of recommendations across all users; it can be viewed as the problem of improving a recommender system&#x00027;s ability to recommend long-tail items.</p></fn>
<fn id="fn0003"><p><sup>3</sup>For more details of the fast greedy MAP inference algorithm, please refer to Chen et al. (<xref ref-type="bibr" rid="B3">2018</xref>).</p></fn>
<fn id="fn0004"><p><sup>4</sup>However, the diversification re-rank aforementioned could result in <italic>R</italic><sub><italic>d</italic></sub> = {<monospace>eggs</monospace>, <monospace>banana</monospace>, <monospace>cheese</monospace>, <monospace>bread</monospace>, <monospace>sausage</monospace>, <monospace>cereal</monospace>, <monospace>margarine</monospace>, <monospace>yogurt</monospace>}.</p></fn>
<fn id="fn0005"><p><sup>5</sup><xref ref-type="table" rid="A1">Algorithm 1</xref> in Chen et al. (<xref ref-type="bibr" rid="B3">2018</xref>).</p></fn>
<fn id="fn0006"><p><sup>6</sup><italic>tax</italic>(&#x000B7;) returns the department of the input item.</p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Adomavicius</surname> <given-names>G.</given-names></name> <name><surname>Kwon</surname> <given-names>Y.</given-names></name></person-group> (<year>2012</year>). <article-title>Improving aggregate recommendation diversity using ranking-based techniques</article-title>. <source>IEEE Trans. Knowl. Data Eng</source>. <volume>24</volume>, <fpage>896</fpage>&#x02013;<lpage>911</lpage>. <pub-id pub-id-type="doi">10.1109/TKDE.2011.15</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Barkan</surname> <given-names>O.</given-names></name> <name><surname>Koenigstein</surname> <given-names>N.</given-names></name></person-group> (<year>2016</year>). <article-title>Item2vec: neural item embedding for collaborative filtering,</article-title> in <source>Proceedings of the Poster Track of the 10th ACM Conference on Recommender Systems (RecSys 2016), Boston, USA, September 17, 2016, volume 1688 of CEUR Workshop Proceedings</source> (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>CEUR-WS.org</publisher-name>).</citation></ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>G.</given-names></name> <name><surname>Zhou</surname> <given-names>E.</given-names></name></person-group> (<year>2018</year>). <article-title>Fast greedy MAP inference for determinantal point process to improve recommendation diversity,</article-title> in <source>Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018</source> (<publisher-loc>Montreal, QC</publisher-loc>), <fpage>5627</fpage>&#x02013;<lpage>5638</lpage>.<pub-id pub-id-type="pmid">31955680</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x000ED;ez</surname> <given-names>J.</given-names></name> <name><surname>Mart&#x000ED;nez-Rego</surname> <given-names>D.</given-names></name> <name><surname>Alonso-Betanzos</surname> <given-names>A.</given-names></name> <name><surname>Luaces</surname> <given-names>O.</given-names></name> <name><surname>Bahamonde</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Optimizing novelty and diversity in recommendations</article-title>. <source>Prog. Artif. Intell</source>. <volume>8</volume>, <fpage>101</fpage>&#x02013;<lpage>109</lpage>. <pub-id pub-id-type="doi">10.1007/s13748-018-0158-4</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hao</surname> <given-names>J.</given-names></name> <name><surname>Zhao</surname> <given-names>T.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Dong</surname> <given-names>X. L.</given-names></name> <name><surname>Faloutsos</surname> <given-names>C.</given-names></name> <name><surname>Sun</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>P-companion: a principled framework for diversified complementary product recommendation,</article-title> in <source>CIKM &#x00027;20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020</source>, eds <person-group person-group-type="editor"><name><surname>d&#x00027;Aquin</surname> <given-names>M.</given-names></name> <name><surname>Dietze</surname> <given-names>S.</given-names></name> <name><surname>Hauff</surname> <given-names>C.</given-names></name> <name><surname>Curry</surname> <given-names>E.</given-names></name> <name><surname>Cudr&#x000E9;-Mauroux</surname> <given-names>P.</given-names></name></person-group> (<publisher-loc>Ireland</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>2517</fpage>&#x02013;<lpage>2524</lpage>.</citation></ref>
<ref id="B6">
<citation citation-type="web"><person-group person-group-type="author"><collab>Instacart</collab></person-group> (<year>2017</year>). <source>Instacart Market Basket Analysis</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/c/instacart-market-basket-analysis/">https://www.kaggle.com/c/instacart-market-basket-analysis/</ext-link></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kulesza</surname> <given-names>A.</given-names></name> <name><surname>Taskar</surname> <given-names>B.</given-names></name></person-group> (<year>2012</year>). <article-title>Determinantal point processes for machine learning</article-title>. <source>CoRR</source>, abs/1207.6083. <pub-id pub-id-type="doi">10.1561/9781601986290</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>J.</given-names></name> <name><surname>Shi</surname> <given-names>K.</given-names></name> <name><surname>Guo</surname> <given-names>Q.</given-names></name></person-group> (<year>2012</year>). <article-title>Solving the accuracy-diversity dilemma via directed random walks</article-title>. <source>CoRR</source>, abs/1201.6278. <pub-id pub-id-type="doi">10.1103/PhysRevE.85.016118</pub-id><pub-id pub-id-type="pmid">22400636</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Gu</surname> <given-names>Y.</given-names></name> <name><surname>Ding</surname> <given-names>Z.</given-names></name> <name><surname>Gao</surname> <given-names>J.</given-names></name> <name><surname>Guo</surname> <given-names>Z.</given-names></name> <name><surname>Bao</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Decoupled graph convolution network for inferring substitutable and complementary items,</article-title> in <source>CIKM &#x00027;20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020</source>, eds <person-group person-group-type="editor"><name><surname>d&#x00027;Aquin</surname> <given-names>M.</given-names></name> <name><surname>Dietze</surname> <given-names>S.</given-names></name> <name><surname>Hauff</surname> <given-names>C.</given-names></name> <name><surname>Curry</surname> <given-names>E.</given-names></name> <name><surname>Cudr&#x000E9;-Mauroux</surname> <given-names>P.</given-names></name></person-group> (<publisher-loc>Ireland</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>2621</fpage>&#x02013;<lpage>2628</lpage>.</citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Wu</surname> <given-names>Q.</given-names></name> <name><surname>Miao</surname> <given-names>C.</given-names></name> <name><surname>Cui</surname> <given-names>L.</given-names></name> <name><surname>Zhao</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Diversity-promoting deep reinforcement learning for interactive recommendation</article-title>. <source>CoRR</source>, abs/1903.07826. <pub-id pub-id-type="doi">10.48550/arXiv.1903.07826</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>L.</given-names></name> <name><surname>Xu</surname> <given-names>J.</given-names></name> <name><surname>Cho</surname> <given-names>J. H. D.</given-names></name> <name><surname>K&#x000F6;rpeoglu</surname> <given-names>E.</given-names></name> <name><surname>Kumar</surname> <given-names>S.</given-names></name> <name><surname>Achan</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>NEAT: a label noise-resistant complementary item recommender system with trustworthy evaluation,</article-title> in <source>2021 IEEE International Conference on Big Data (Big Data)</source>, Y. Chen, H. Ludwig, Y. Tu, U. M. Fayyad, X. Zhu, X. Hu, S. Byna, X. Liu, J. Zhang, S. Pan, V. Papalexakis, J. Wang, A. Cuzzocrea, and C. Ordonez (<publisher-loc>Orlando, FL</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>469</fpage>&#x02013;<lpage>479</lpage>.</citation></ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>McAuley</surname> <given-names>J. J.</given-names></name> <name><surname>Pandey</surname> <given-names>R.</given-names></name> <name><surname>Leskovec</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Inferring networks of substitutable and complementary products,</article-title> in <source>Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source> (<publisher-loc>Sydney, NSW</publisher-loc>: <publisher-name>ACM.</publisher-name>), <fpage>785</fpage>&#x02013;<lpage>794</lpage>.</citation></ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Qin</surname> <given-names>L.</given-names></name> <name><surname>Zhu</surname> <given-names>X.</given-names></name></person-group> (<year>2013</year>). <article-title>Promoting diversity in recommendation by entropy regularizer,</article-title> in <source>IJCAI 2013, Proceedings of the 23rd International Joint Conference on Artificial Intelligence</source> (<publisher-loc>Beijing</publisher-loc>: <publisher-name>IJCAI/AAAI</publisher-name>), <fpage>2698</fpage>&#x02013;<lpage>2704</lpage>.</citation></ref>
<ref id="B14">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wan</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Liu</surname> <given-names>J.</given-names></name> <name><surname>Bennett</surname> <given-names>P.</given-names></name> <name><surname>McAuley</surname> <given-names>J. J.</given-names></name></person-group> (<year>2018</year>). <article-title>Representing and recommending shopping baskets with complementarity, compatibility and loyalty,</article-title> in <source>Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018</source> (<publisher-loc>Torino</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>1133</fpage>&#x02013;<lpage>1142</lpage>.</citation></ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Jiang</surname> <given-names>Z.</given-names></name> <name><surname>Ren</surname> <given-names>Z.</given-names></name> <name><surname>Tang</surname> <given-names>J.</given-names></name> <name><surname>Yin</surname> <given-names>D.</given-names></name></person-group> (<year>2018</year>). <article-title>A path-constrained framework for discriminating substitutable and complementary products in e-commerce,</article-title> in <source>Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM 2018</source>, eds <person-group person-group-type="editor"><name><surname>Chang</surname> <given-names>Y.</given-names></name> <name><surname>Zhai</surname> <given-names>C.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Maarek</surname> <given-names>Y.</given-names></name></person-group> (<publisher-loc>Marina Del Rey, CA</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>619</fpage>&#x02013;<lpage>627</lpage>.</citation></ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wilhelm</surname> <given-names>M.</given-names></name> <name><surname>Ramanathan</surname> <given-names>A.</given-names></name> <name><surname>Bonomo</surname> <given-names>A.</given-names></name> <name><surname>Jain</surname> <given-names>S.</given-names></name> <name><surname>Chi</surname> <given-names>E. H.</given-names></name> <name><surname>Gillenwater</surname> <given-names>J.</given-names></name></person-group> (<year>2018</year>). <article-title>Practical diversified recommendations on youtube with determinantal point processes,</article-title> in <source>Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018</source> (<publisher-loc>Torino</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>2165</fpage>&#x02013;<lpage>2173</lpage>.</citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>Q.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Miao</surname> <given-names>C.</given-names></name> <name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>Guan</surname> <given-names>L.</given-names></name> <name><surname>Tang</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>Recent advances in diversified recommendation</article-title>. <source>CoRR</source>, abs/1905.06589. <pub-id pub-id-type="doi">10.48550/arXiv.1905.06589</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>D.</given-names></name> <name><surname>Ruan</surname> <given-names>C.</given-names></name> <name><surname>K&#x000F6;rpeoglu</surname> <given-names>E.</given-names></name> <name><surname>Kumar</surname> <given-names>S.</given-names></name> <name><surname>Achan</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Product knowledge graph embedding for e-commerce</article-title>. <source>CoRR</source>, abs/1911.12481. <pub-id pub-id-type="doi">10.1145/3336191.3371778</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Lu</surname> <given-names>H.</given-names></name> <name><surname>Niu</surname> <given-names>W.</given-names></name> <name><surname>Caverlee</surname> <given-names>J.</given-names></name></person-group> (<year>2018</year>). <article-title>Quality-aware neural complementary item recommendation,</article-title> in <source>Proceedings of the 12th ACM Conference on Recommender Systems, RecSys 2018</source>, eds <person-group person-group-type="editor"><name><surname>Pera</surname> <given-names>S.</given-names></name> <name><surname>Ekstrand</surname> <given-names>M. D.</given-names></name> <name><surname>Amatriain</surname> <given-names>X.</given-names></name> <name><surname>O&#x00027;Donovan</surname> <given-names>J.</given-names></name></person-group> (<publisher-loc>Vancouver, BC</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>77</fpage>&#x02013;<lpage>85</lpage>.</citation></ref>
<ref id="B20">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ziegler</surname> <given-names>C.</given-names></name> <name><surname>McNee</surname> <given-names>S. M.</given-names></name> <name><surname>Konstan</surname> <given-names>J. A.</given-names></name> <name><surname>Lausen</surname> <given-names>G.</given-names></name></person-group> (<year>2005</year>). <article-title>Improving recommendation lists through topic diversification,</article-title> in <source>Proceedings of the 14th International Conference on World Wide Web, WWW 2005</source> (<publisher-loc>Chiba</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>22</fpage>&#x02013;<lpage>32</lpage>.</citation></ref>
</ref-list>
</back>
</article>