<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="review-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Phys.</journal-id>
<journal-title>Frontiers in Physics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Phys.</abbrev-journal-title>
<issn pub-type="epub">2296-424X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">766540</article-id>
<article-id pub-id-type="doi">10.3389/fphy.2021.766540</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Physics</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Adversarial Machine Learning on Social Network: A Survey</article-title>
<alt-title alt-title-type="left-running-head">Guo et&#x20;al.</alt-title>
<alt-title alt-title-type="right-running-head">Adversarial Example for Social Network</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Guo</surname>
<given-names>Sensen</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1453419/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Li</surname>
<given-names>Xiaoyu</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1462834/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Mu</surname>
<given-names>Zhiying</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1517320/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>Research &#x26; Development Institute of Northwestern Polytechnical University in Shenzhen, <addr-line>Shenzhen</addr-line>, <country>China</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>School of Cybersecurity, Northwestern Polytechnical University, <addr-line>Xi&#x2019;an</addr-line>, <country>China</country>
</aff>
<author-notes>
<corresp id="c001">&#x2a;Correspondence: Xiaoyu Li, <email>lixiaoyu@nwpu.edu.cn</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Social Physics, a section of the journal Frontiers in Physics</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1271072/overview">Shudong Li</ext-link>, Guangzhou University, China</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1469795/overview">Xinghua Li</ext-link>, Xidian University, China</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/825631/overview">Gui-Quan Sun</ext-link>, North University of China, China</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>29</day>
<month>11</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>9</volume>
<elocation-id>766540</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>08</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>10</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Guo, Li and Mu.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Guo, Li and Mu</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>In recent years, machine learning technology has brought great improvements to social network applications such as recommendation systems, sentiment analysis, and text generation. However, it cannot be ignored that machine learning algorithms are vulnerable to adversarial examples: adding perturbations that are imperceptible to the human eye to the original data can cause machine learning algorithms to produce wrong outputs with high probability. This vulnerability restricts the widespread use of machine learning algorithms in real life. In this paper, we focus on adversarial machine learning on social networks in recent years from three aspects: sentiment analysis, recommendation systems, and spam detection. We review some typical applications of machine learning algorithms in these three areas, together with adversarial example generation and defense algorithms proposed for them in recent years. In addition, we analyze the current research progress and offer prospects for future research directions.</p>
</abstract>
<kwd-group>
<kwd>social networks</kwd>
<kwd>adversarial examples</kwd>
<kwd>sentiment analysis</kwd>
<kwd>recommendation system</kwd>
<kwd>spam detection</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>In recent years, with the rapid development of internet technology, social networks have played an increasingly important role in people&#x2019;s lives [<xref ref-type="bibr" rid="B1">1</xref>]. Social networks such as Facebook, Twitter, and Instagram have shortened the distance between people and changed the way people get information. For example, more and more people are willing to share new things happening around them with friends through social networks, and government agencies release the latest policy information to the public through social networks. With their rapid popularization, the role of social networks is no longer limited to providing people with a channel to communicate with friends. For example, users can be profiled according to their timelines, and the system can then recommend friends, topics, information, and products that users may be interested in, which greatly enriches people&#x2019;s leisure life. Filtering useless spam and robot accounts can not only reduce the time users spend browsing spam but also protect them from phishing attacks. Besides, research on social network information dissemination [<xref ref-type="bibr" rid="B2">2</xref>, <xref ref-type="bibr" rid="B3">3</xref>] can not only facilitate social network marketing but also effectively predict and control public opinion. The study of the interaction between disease and disease information on complex networks [<xref ref-type="bibr" rid="B4">4</xref>] has played an important role in understanding the dynamics of epidemic transmission and its interaction with information dissemination. Therefore, how to use social networks to achieve various functions has become a research hotspot in recent years.</p>
<p>With the significant improvement in computer performance and the widespread application of GPUs, machine learning (ML), especially deep learning (DL), has been widely used in various industries (such as automatic driving, computer vision, machine translation, recommendation systems, and cybersecurity). In social networks, many scholars also use machine learning algorithms to implement functions such as friend or information recommendation, user interest analysis, and spam detection. However, it cannot be ignored that machine learning algorithms are vulnerable to adversarial examples: adding perturbations that are imperceptible to the human eye can mislead a classifier into outputting a completely different classification result. Since the concept of adversarial examples was proposed, many studies have shown that no matter how a machine learning model is adjusted, it can always be successfully broken by new adversarial example generation methods. In recent years, research on the generation of and defense against adversarial examples has spread from computer vision [<xref ref-type="bibr" rid="B5">5</xref>] to social networks, cybersecurity [<xref ref-type="bibr" rid="B6">6</xref>], natural language processing [<xref ref-type="bibr" rid="B5">5</xref>, <xref ref-type="bibr" rid="B7">7</xref>], audio and video processing [<xref ref-type="bibr" rid="B8">8</xref>], graph data processing [<xref ref-type="bibr" rid="B5">5</xref>], etc. Therefore, the ability to defend effectively against adversarial examples has become a key factor in whether machine learning algorithms can be applied on a large scale.</p>
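<p>The vulnerability can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the classic adversarial example generation methods, applied here to a toy logistic-regression classifier. The weights, input, and perturbation budget below are illustrative assumptions, not drawn from any cited work:</p>

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM sketch for a logistic-regression classifier
    p(y=1|x) = sigmoid(w.x + b): move x by eps in the direction of the
    sign of the loss gradient, a small per-feature change that can
    nevertheless flip the predicted label."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model confidence for class 1
    grad = (p - y) * w                             # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad)

# toy example: a point the model classifies as positive (true label 1)
w = np.array([1.0, -2.0]); b = 0.0
x = np.array([1.0, 0.2])                 # w.x + b = 0.6, so class 1
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.5)
# the perturbed point is pushed across the decision boundary to class 0
```

<p>In image classification the same idea is applied pixel-wise with a much smaller eps, which is why the perturbation stays imperceptible to the human eye.</p>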
<p>In this paper, we focus on adversarial machine learning in the field of social networks, that is, adversarial example generation and defense techniques in this field. First, we review the recent progress of machine learning algorithms in social networks in terms of sentiment analysis, recommendation systems, and spam detection; we then summarize the latest research on adversarial example generation and defense algorithms. Next, we sort out the research progress of adversarial example generation and defense algorithms in social networks. Finally, we summarize the advantages and disadvantages of existing algorithms and discuss future research directions.</p>
<p>The rest of this paper is organized as follows. <xref ref-type="sec" rid="s2">Section 2</xref> reviews the application of machine learning algorithms in social networks in recent years. <xref ref-type="sec" rid="s3">Section 3</xref> reviews the security issues faced by machine learning algorithms and the robustness reinforcement strategies against different attacks. <xref ref-type="sec" rid="s4">Section 4</xref> summarizes attack and defense algorithms for machine learning in social networks. <xref ref-type="sec" rid="s5">Section 5</xref> analyzes the open problems of adversarial example generation and defense algorithms in the field of social networks, discusses future research directions, and concludes this paper.</p>
</sec>
<sec id="s2">
<title>2 Machine Learning in Social Networks</title>
<p>While social networks such as Twitter, Facebook, and Instagram facilitate people&#x2019;s communication, they also change people&#x2019;s lifestyles to a great extent, and the application of machine learning has in turn promoted the vigorous development of social networks. The main applications of machine learning in social networks include sentiment analysis, recommendation systems, spam detection, community detection [<xref ref-type="bibr" rid="B9">9</xref>], network immunization [<xref ref-type="bibr" rid="B10">10</xref>], user behavior analysis [<xref ref-type="bibr" rid="B11">11</xref>, <xref ref-type="bibr" rid="B12">12</xref>], and other aspects. In this paper, we mainly review the application of machine learning in social networks from three aspects: sentiment analysis, recommendation systems, and spam detection.</p>
<sec id="s2-1">
<title>2.1 Sentiment Analysis</title>
<p>Millions of users post opinions on social networks every day, covering daily life, news, entertainment, sports, and other topics. The sentiment of a user&#x2019;s comments on a topic can be classified as positive, neutral, or negative. From a user&#x2019;s emotional tendencies on different topics, we can infer the user&#x2019;s personality, values, and other information, and more targeted strategies can then be applied to specific users in activities such as topic dissemination and product promotion. Some research on machine learning in sentiment analysis is summarized in <xref ref-type="table" rid="T1">Table&#x20;1</xref>.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Machine learning in sentiment analysis.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Authors</th>
<th align="center">Introduced methods</th>
<th align="center">Year</th>
<th align="center">Datasets</th>
<th align="center">Baseline</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Al-Smadi et&#x20;al. [<xref ref-type="bibr" rid="B13">13</xref>]</td>
<td align="left">SVM and Deep RNN</td>
<td align="center">2018</td>
<td align="left">Arabic Hotels&#x2019; reviews</td>
<td align="left">&#x2014;</td>
</tr>
<tr>
<td align="left">Hitesh et&#x20;al. [<xref ref-type="bibr" rid="B14">14</xref>]</td>
<td align="left">Word2Vec &#x26; Random forest</td>
<td align="center">2019</td>
<td align="left">Twitter</td>
<td align="left">BOW, TF-IDF</td>
</tr>
<tr>
<td align="left">Long et&#x20;al. [<xref ref-type="bibr" rid="B15">15</xref>]</td>
<td align="left">BiLSTM-MHAT</td>
<td align="center">2019</td>
<td align="left">Taobao</td>
<td align="left">CNN, BiLSTM, Attention-BiLSTM</td>
</tr>
<tr>
<td align="left">Djaballah et&#x20;al. [<xref ref-type="bibr" rid="B16">16</xref>]</td>
<td align="left">SVM, Random Forest</td>
<td align="center">2019</td>
<td align="left">Twitter</td>
<td align="left">Word2vec [<xref ref-type="bibr" rid="B17">17</xref>]</td>
</tr>
<tr>
<td align="left">Ho et&#x20;al. [<xref ref-type="bibr" rid="B18">18</xref>]</td>
<td align="left">Combinatorial model</td>
<td align="center">2019</td>
<td align="left">Kaggle</td>
<td align="left">LR, NB, RF, SVM, MLP</td>
</tr>
<tr>
<td align="left">Liu et&#x20;al. [<xref ref-type="bibr" rid="B19">19</xref>]</td>
<td align="left">AS-Reasoner</td>
<td align="center">2019</td>
<td align="left">SemEval-2014, SemEval-2015</td>
<td align="left">LSTM, TD-LSTM, etc.</td>
</tr>
<tr>
<td align="left">Yao et&#x20;al. [<xref ref-type="bibr" rid="B20">20</xref>]</td>
<td align="left">DSSA-H</td>
<td align="center">2020</td>
<td align="left">Twitter</td>
<td align="left">SVM, RF</td>
</tr>
<tr>
<td align="left">Umer et&#x20;al. [<xref ref-type="bibr" rid="B21">21</xref>]</td>
<td align="left">CNN-LSTM</td>
<td align="center">2021</td>
<td align="left">Twitter</td>
<td align="left">CNN [<xref ref-type="bibr" rid="B22">22</xref>], LSTM [<xref ref-type="bibr" rid="B23">23</xref>]</td>
</tr>
<tr>
<td align="left">Lv et&#x20;al. [<xref ref-type="bibr" rid="B24">24</xref>]</td>
<td align="left">CAMN</td>
<td align="center">2021</td>
<td align="left">SemEval-2014, Twitter</td>
<td align="left">CEA, DAuM, TNet-AS, etc.</td>
</tr>
<tr>
<td align="left">Rawat et&#x20;al. [<xref ref-type="bibr" rid="B25">25</xref>]</td>
<td align="left">SMODT</td>
<td align="center">2021</td>
<td align="left">Twitter</td>
<td align="left">KNN, SVM, DT, SMO</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Wang et al. [<xref ref-type="bibr" rid="B26">26</xref>] introduced a multi-head attention-based LSTM model for aspect-level sentiment analysis. They carried out experiments on the dataset of SemEval 2014 Task 4 [<xref ref-type="bibr" rid="B27">27</xref>], and the results show that their model is strongly competitive in aspect-level classification. Building on this, Long et al. [<xref ref-type="bibr" rid="B15">15</xref>] introduced an improved method combining a bidirectional LSTM network with a multi-head attention mechanism; they used multi-head attention to learn relevant information from different representation subspaces and achieved 92.11% accuracy on a comment dataset from Taobao.</p>
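<p>As a rough illustration of the attention weighting these models rely on, the following sketch implements scaled dot-product attention with multiple heads over a toy sequence of hidden states. The dimensions and the random projection matrices are illustrative assumptions, not the trained configurations of the cited works:</p>

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each output position is a weighted
    sum of the value vectors V, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

def multi_head(X, heads=2):
    """One multi-head attention layer over hidden states X
    (seq_len x d_model): each head attends in its own projected
    subspace and the head outputs are concatenated.  Projections are
    random here purely for illustration; a real model learns them."""
    rng = np.random.default_rng(0)
    d = X.shape[1] // heads
    outs = []
    for _ in range(heads):
        Wq, Wk, Wv = (rng.standard_normal((X.shape[1], d)) for _ in range(3))
        out, _ = attention(X @ Wq, X @ Wk, X @ Wv)
        outs.append(out)
    return np.concatenate(outs, axis=1)

X = np.random.default_rng(1).standard_normal((5, 8))  # 5 tokens, d_model = 8
Y = multi_head(X)                                      # shape (5, 8)
```

<p>In the aspect-level models above, the queries, keys, and values come from (Bi)LSTM hidden states rather than random projections, but the weighting mechanism is the same.</p>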
<p>To perform aspect-based sentiment analysis of Arabic Hotels&#x2019; reviews, both SVM and deep RNN approaches were used in Al-Smadi et al.&#x2019;s [<xref ref-type="bibr" rid="B13">13</xref>] work. They evaluated the methods on the Arabic Hotels&#x2019; reviews dataset. The results show that SVM is superior to the deep RNN approaches in aspect category identification, opinion target expression extraction, and sentiment polarity identification, but inferior in the execution time required for training and testing.</p>
<p>Using the API provided by Twitter, Hitesh et al. [<xref ref-type="bibr" rid="B14">14</xref>] collected 18,000 tweets (excluding retweets) on the Indian elections. Based on these data, they proposed a model combining Word2Vec and random forest for sentiment analysis: a Word2Vec model is used to extract features, which are then used to train a random forest classifier. Their final accuracy reaches 86.8%.</p>
<p>Djaballah et al. [<xref ref-type="bibr" rid="B16">16</xref>] proposed a method to detect content that incites terrorism on Twitter. They collected tweets related to terrorism in Arabic and manually classified them into &#x201c;tweets not inciting terrorism&#x201d; and &#x201c;tweets inciting terrorism&#x201d;. Based on Google&#x2019;s Word2vec method [<xref ref-type="bibr" rid="B17">17</xref>], they introduced a weighted-average Word2vec variant to generate tweet feature vectors; SVM and random forest classifiers were then used for sentiment prediction. The experimental results show that their method slightly improves on the prediction results of the original Word2vec method [<xref ref-type="bibr" rid="B17">17</xref>].</p>
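<p>The weighted-average feature construction used by such pipelines can be sketched as follows. The word vectors and per-word weights below are hypothetical toy values; a real system would load pretrained Word2vec embeddings and derive weights (e.g., TF-IDF) from the corpus:</p>

```python
import numpy as np

# Hypothetical mini word-vector table standing in for pretrained Word2vec.
vectors = {
    "attack": np.array([0.9, 0.1]),
    "peace":  np.array([0.1, 0.8]),
    "city":   np.array([0.4, 0.5]),
}
# Hypothetical per-word importance weights (e.g., TF-IDF scores).
weights = {"attack": 2.0, "peace": 1.0, "city": 0.5}

def tweet_vector(tokens):
    """Weighted average of the word vectors of the known tokens; the
    result is a fixed-length feature vector that can be fed to an SVM
    or random forest classifier."""
    known = [t for t in tokens if t in vectors]
    if not known:
        return np.zeros(2)
    num = sum(weights[t] * vectors[t] for t in known)
    return num / sum(weights[t] for t in known)

v = tweet_vector(["attack", "city", "unknownword"])  # -> array([0.8, 0.18])
```

<p>Out-of-vocabulary tokens are simply skipped, so every tweet maps to a vector of the same dimension regardless of its length.</p>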
<p>Ho et al. [<xref ref-type="bibr" rid="B18">18</xref>] proposed a two-stage combinatorial model for sentiment analysis. In the first stage, they trained five machine learning algorithms on the same dataset: logistic regression, naive Bayes, multilayer perceptron, support vector machine, and random forest. In the second stage, combinatorial fusion is used to combine a subset of these five algorithms, and the experimental results show that combining the algorithms can achieve better performance.</p>
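<p>One simple form of such second-stage fusion is score averaging over a chosen subset of first-stage classifiers. The classifier names and scores below are illustrative assumptions, not the paper&#x2019;s data:</p>

```python
def fuse_scores(score_lists, subset):
    """Combinatorial-fusion sketch: for each sample, average the scores
    assigned by the chosen subset of first-stage classifiers."""
    n = len(score_lists[subset[0]])
    fused = []
    for i in range(n):
        fused.append(sum(score_lists[m][i] for m in subset) / len(subset))
    return fused

# positive-class scores that three hypothetical models assign to 3 samples
scores = {
    "logreg": [0.9, 0.2, 0.6],
    "nb":     [0.8, 0.4, 0.5],
    "svm":    [0.7, 0.1, 0.9],
}
fused = fuse_scores(scores, ["logreg", "svm"])   # fuse one subset
labels = [1 if s >= 0.5 else 0 for s in fused]   # -> [1, 0, 1]
```

<p>Trying different subsets and keeping the best-performing combination on a validation set is what makes the fusion "combinatorial".</p>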
<p>To capture precise sentiment expressions for reasoning in aspect-based sentiment analysis, Liu et al. [<xref ref-type="bibr" rid="B19">19</xref>] introduced a method named Attention-based Sentiment Reasoner (AS-Reasoner), in which an intra attention mechanism and a global attention mechanism were designed. The intra attention computes weights by capturing the sentiment similarity between any two words in a sentence, while the global attention computes weights from a global perspective. They carried out experiments on various datasets, and the results show that AS-Reasoner is language-independent and achieves state-of-the-art macro-F1 and accuracy for aspect-based sentiment analysis.</p>
<p>Umer et al. [<xref ref-type="bibr" rid="B21">21</xref>] proposed a deep learning model combining CNN and LSTM networks to perform sentiment analysis on Twitter. The CNN layer learns higher-level representations of sequences from the original data and feeds them to the LSTM layers. They carried out experiments on three Twitter datasets, a women&#x2019;s e-commerce dataset, an airline sentiment dataset, and a hate speech dataset, achieving accuracies of 78.1, 82.0, and 92.0%, respectively, markedly superior to using CNN [<xref ref-type="bibr" rid="B22">22</xref>] or LSTM [<xref ref-type="bibr" rid="B23">23</xref>] alone.</p>
</sec>
<sec id="s2-2">
<title>2.2 Recommendation System</title>
<p>The recommendation system is an important part of a social network system. Recommendation functions such as friend recommendation, content recommendation, and advertising delivery greatly enrich people&#x2019;s social life while also creating huge economic benefits. Recommending friends and content that users may be interested in extends the time users spend on social networks, and pushing advertising information to users reasonably and effectively can not only create significant economic benefits but also facilitate users&#x2019; lives. As shown in <xref ref-type="table" rid="T2">Table&#x20;2</xref>, with the rapid development of machine learning, many scholars have carried out research on machine learning-based social network recommendation systems.</p>
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>Machine learning in recommendation systems.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Authors</th>
<th align="center">Introduced methods</th>
<th align="center">Year</th>
<th align="center">Datasets</th>
<th align="center">Baseline</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Fan et&#x20;al. [<xref ref-type="bibr" rid="B28">28</xref>]</td>
<td align="left">GraphRec</td>
<td align="center">2019</td>
<td align="left">Epinions, Ciao</td>
<td align="left">GC-MC [<xref ref-type="bibr" rid="B29">29</xref>], DeepSoR [<xref ref-type="bibr" rid="B30">30</xref>], NeuMF [<xref ref-type="bibr" rid="B31">31</xref>]</td>
</tr>
<tr>
<td align="left">Gui et&#x20;al. [<xref ref-type="bibr" rid="B32">32</xref>]</td>
<td align="left">Cooperative Multi-Agent Approach</td>
<td align="center">2019</td>
<td align="left">Dataset Containing 50 Historical Tweets Per User</td>
<td align="left">LSTM, Attention methods, Independent Q-Learning, Random sampling</td>
</tr>
<tr>
<td align="left">Guo et&#x20;al. [<xref ref-type="bibr" rid="B33">33</xref>]</td>
<td align="left">GNN-SoR</td>
<td align="center">2020</td>
<td align="left">Epinions [<xref ref-type="bibr" rid="B34">34</xref>], Yelp [<xref ref-type="bibr" rid="B35">35</xref>], Flixster [<xref ref-type="bibr" rid="B36">36</xref>]</td>
<td align="left">SocialMF [<xref ref-type="bibr" rid="B37">37</xref>], TrustSVD [<xref ref-type="bibr" rid="B38">38</xref>], TrustMF [<xref ref-type="bibr" rid="B39">39</xref>], AutoRec [<xref ref-type="bibr" rid="B40">40</xref>]</td>
</tr>
<tr>
<td align="left">Huang et&#x20;al. [<xref ref-type="bibr" rid="B41">41</xref>]</td>
<td align="left">MAGRM</td>
<td align="center">2020</td>
<td align="left">Meetup, MovieLens-1M</td>
<td align="left">DPMF-CNN [<xref ref-type="bibr" rid="B42">42</xref>], AGR [<xref ref-type="bibr" rid="B43">43</xref>], AGREE [<xref ref-type="bibr" rid="B44">44</xref>]</td>
</tr>
<tr>
<td align="left">Pan et&#x20;al. [<xref ref-type="bibr" rid="B45">45</xref>]</td>
<td align="left">CoDAE</td>
<td align="center">2020</td>
<td align="left">Epinions, Ciao</td>
<td align="left">CDAE [<xref ref-type="bibr" rid="B46">46</xref>], TDAE [<xref ref-type="bibr" rid="B47">47</xref>]</td>
</tr>
<tr>
<td align="left">Zheng et&#x20;al. [<xref ref-type="bibr" rid="B48">48</xref>]</td>
<td align="left">ITRA</td>
<td align="center">2021</td>
<td align="left">Delicious [<xref ref-type="bibr" rid="B49">49</xref>], FilmTrust [<xref ref-type="bibr" rid="B50">50</xref>], CiaoDVD [<xref ref-type="bibr" rid="B51">51</xref>]</td>
<td align="left">CDAE [<xref ref-type="bibr" rid="B46">46</xref>], SAMN [<xref ref-type="bibr" rid="B52">52</xref>], CVAE [<xref ref-type="bibr" rid="B53">53</xref>]</td>
</tr>
<tr>
<td align="left">Ni et&#x20;al. [<xref ref-type="bibr" rid="B54">54</xref>]</td>
<td align="left">RM-DRL</td>
<td align="center">2021</td>
<td align="left">Netflix, BookCrossing, Movielens-20M, Movielens-1M, HetRec 2011-Movielens</td>
<td align="left">ConvMF [<xref ref-type="bibr" rid="B55">55</xref>], DRMF [<xref ref-type="bibr" rid="B56">56</xref>], GNN [<xref ref-type="bibr" rid="B57">57</xref>], AFM [<xref ref-type="bibr" rid="B58">58</xref>], RACMF [<xref ref-type="bibr" rid="B59">59</xref>], HRAM [<xref ref-type="bibr" rid="B60">60</xref>], DAINN [<xref ref-type="bibr" rid="B61">61</xref>]</td>
</tr>
<tr>
<td align="left">Tahmasebi et&#x20;al. [<xref ref-type="bibr" rid="B62">62</xref>]</td>
<td align="left">SRDNet</td>
<td align="center">2021</td>
<td align="left">MovieTweetings, Open Movie Database</td>
<td align="left">AutoRec [<xref ref-type="bibr" rid="B40">40</xref>], MRS-RBM [<xref ref-type="bibr" rid="B63">63</xref>], PP-CF [<xref ref-type="bibr" rid="B64">64</xref>], etc.</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Fan et al. [<xref ref-type="bibr" rid="B28">28</xref>] performed social recommendation with graph neural networks, introducing a model named GraphRec (Graph Neural Network Framework), which is composed of a user modeling component, an item modeling component, and a rating prediction component. The user modeling and item modeling components each use a graph neural network and an attention network to learn user latent factors (<bold>h</bold><sub><bold>i</bold></sub>) and item latent factors (<bold>z</bold><sub><bold>j</bold></sub>) from the original data; the rating prediction component concatenates the user and item latent factors and feeds them into a multilayer perceptron for rating prediction. They evaluated GraphRec on two representative datasets, Epinions and Ciao, and the results show that GraphRec outperforms GC-MC (Graph Convolutional Matrix Completion) [<xref ref-type="bibr" rid="B29">29</xref>], DeepSoR (Deep Neural Network Model on Social Relations for Recommendation) [<xref ref-type="bibr" rid="B30">30</xref>], NeuMF (Neural Matrix Factorization) [<xref ref-type="bibr" rid="B31">31</xref>], and some other baseline algorithms.</p>
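<p>The final prediction step of this architecture, concatenating the two latent factors and passing them through a multilayer perceptron, can be sketched as a forward pass. The weights here are random placeholders; in GraphRec they are learned end to end together with the graph neural networks:</p>

```python
import numpy as np

def predict_rating(h_i, z_j, W1, W2):
    """GraphRec-style rating prediction sketch: concatenate the user
    latent factor h_i and the item latent factor z_j, then pass the
    result through a small MLP with one ReLU hidden layer."""
    x = np.concatenate([h_i, z_j])      # [h_i ; z_j]
    hidden = np.maximum(0.0, W1 @ x)    # ReLU hidden layer
    return float(W2 @ hidden)           # scalar predicted rating

rng = np.random.default_rng(0)
h_i = rng.standard_normal(4)            # user latent factor (toy dim 4)
z_j = rng.standard_normal(4)            # item latent factor (toy dim 4)
W1 = rng.standard_normal((8, 8))        # hidden-layer weights (placeholder)
W2 = rng.standard_normal(8)             # output-layer weights (placeholder)
r_hat = predict_rating(h_i, z_j, W1, W2)
```
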
<p>Guo et al. [<xref ref-type="bibr" rid="B33">33</xref>] hold that the feature space of social recommendation is composed of user features and item features: the user features comprise inherent preference and social influence, while the item features include attribute contents, attribute correlations, and attribute concatenation. They introduced a framework named GNN-SoR (Graph Neural Network-based Social Recommendation Framework) to exploit the correlations of item attributes. In their framework, two graph neural networks are used to encode the user feature space and the item feature space, respectively. The two encoded spaces are then regarded as two latent factors in a matrix factorization process to predict unknown preference ratings. They conducted experiments on the real-world datasets Epinions [<xref ref-type="bibr" rid="B34">34</xref>], Yelp [<xref ref-type="bibr" rid="B35">35</xref>], and Flixster [<xref ref-type="bibr" rid="B36">36</xref>], and the experimental results indicate that the performance of GNN-SoR is superior to four baseline algorithms: SocialMF (Matrix Factorization-based Social Recommendation Networks) [<xref ref-type="bibr" rid="B37">37</xref>], TrustSVD [<xref ref-type="bibr" rid="B38">38</xref>], TrustMF [<xref ref-type="bibr" rid="B39">39</xref>], and AutoRec [<xref ref-type="bibr" rid="B40">40</xref>].</p>
<p>Huang et al. [<xref ref-type="bibr" rid="B41">41</xref>] introduced a model named MAGRM (Multiattention-based Group Recommendation Model) to perform group recommendation. MAGRM consists of two multiattention-based components: VR-GF (vector representation for group features), which extracts a deep semantic feature for each group, and PL-GI (preference learning for groups on items), which builds on VR-GF to predict groups&#x2019; ratings on items. In experiments with two real-world datasets, Meetup and MovieLens-1M, MAGRM outperforms AGR [<xref ref-type="bibr" rid="B43">43</xref>], AGREE (Attentive Group Recommendation) [<xref ref-type="bibr" rid="B44">44</xref>], and other algorithms.</p>
<p>Pan et al. [<xref ref-type="bibr" rid="B45">45</xref>] introduced a model named CoDAE (Correlative Denoising Autoencoder) for the top-k recommendation task, which learns user features by modeling the user roles of rater, truster, and trustee with three separate denoising autoencoder models. In experiments on the Ciao and Epinions datasets, they found that their method is superior to CDAE (Collaborative Denoising Auto-Encoders) [<xref ref-type="bibr" rid="B46">46</xref>], TDAE [<xref ref-type="bibr" rid="B47">47</xref>], and some other baseline algorithms. Similar to [<xref ref-type="bibr" rid="B45">45</xref>], Zheng et al. [<xref ref-type="bibr" rid="B48">48</xref>] proposed a model named ITRA (Implicit Trust Relation-Aware model), based on a variational autoencoder, to learn the hidden relationships in huge amounts of graph data. They evaluated their model on three datasets, Delicious [<xref ref-type="bibr" rid="B49">49</xref>], FilmTrust [<xref ref-type="bibr" rid="B50">50</xref>], and CiaoDVD [<xref ref-type="bibr" rid="B51">51</xref>], where the performance of ITRA was markedly superior to SAMN (Social Attention Memory Network) [<xref ref-type="bibr" rid="B52">52</xref>], CVAE [<xref ref-type="bibr" rid="B53">53</xref>], and CDAE [<xref ref-type="bibr" rid="B46">46</xref>] in the top-n item recommendation task.</p>
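<p>The denoising autoencoder building block shared by these models can be sketched as a single forward pass: corrupt a user&#x2019;s rating vector, encode it to a low-dimensional code, and decode a reconstruction. The dimensions, corruption rate, and weight matrices are toy placeholders, not the cited models&#x2019; trained parameters:</p>

```python
import numpy as np

def denoising_autoencoder(x, W_enc, W_dec, drop=0.3, seed=0):
    """One forward pass of a denoising autoencoder sketch: randomly
    zero a fraction of the input entries, encode the corrupted vector
    to a low-dimensional code, and decode a reconstruction.  Training
    (not shown) minimizes reconstruction error against the clean x."""
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) >= drop   # keep roughly 70% of the entries
    corrupted = x * mask
    code = np.tanh(W_enc @ corrupted)    # low-dimensional user code
    return W_dec @ code                  # reconstructed rating vector

rng = np.random.default_rng(1)
x = rng.random(10)                       # one user's ratings over 10 items
W_enc = rng.standard_normal((3, 10))     # 10 items encoded to a 3-dim code
W_dec = rng.standard_normal((10, 3))
x_hat = denoising_autoencoder(x, W_enc, W_dec)
```

<p>CoDAE trains three such autoencoders, one per user role, and correlates their codes; ITRA replaces the deterministic code with a variational latent distribution.</p>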
<p>To capture the semantic features of users and items effectively, Ni et al. [<xref ref-type="bibr" rid="B54">54</xref>] proposed a model named RM-DRL (Recommendation Model based on Deep Representation Learning). First, a CNN learns the semantic feature vector of each item from its primitive feature vectors. Next, an attention-integrated gated recurrent unit learns a user semantic feature vector from a series of user features such as the user&#x2019;s preference history, semantic feature vectors, and primitive feature vector. Finally, the users&#x2019; preferences on the items are calculated from the semantic feature vectors of the items and the users. They conducted experiments on five datasets, and the results show that the performance of RM-DRL is superior to ConvMF [<xref ref-type="bibr" rid="B55">55</xref>], AFM (Attentional Factorization Machines) [<xref ref-type="bibr" rid="B58">58</xref>], GNN [<xref ref-type="bibr" rid="B57">57</xref>], HRAM (Hybrid Recurrent Attention Machine) [<xref ref-type="bibr" rid="B60">60</xref>], etc.</p>
</sec>
<sec id="s2-3">
<title>2.3 Spam Detection</title>
<p>Social networking is one of the main channels through which people acquire information. However, overwhelming spam and phishing links also bring great trouble to people&#x2019;s work and life. Therefore, how to detect spam on social networks effectively is an important issue. As shown in <xref ref-type="table" rid="T3">Table&#x20;3</xref>, many scholars have proposed various methods to solve this problem in recent years.</p>
<table-wrap id="T3" position="float">
<label>TABLE 3</label>
<caption>
<p>Machine learning in spam detection.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Authors</th>
<th align="center">Introduced methods</th>
<th align="center">Year</th>
<th align="center">Datasets</th>
<th align="center">Baseline</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Karakasli et&#x20;al. [<xref ref-type="bibr" rid="B65">65</xref>]</td>
<td align="left">SVM and KNN</td>
<td align="center">2019</td>
<td align="left">Twitter</td>
<td align="left">&#x2014;</td>
</tr>
<tr>
<td align="left">Jain et&#x20;al. [<xref ref-type="bibr" rid="B66">66</xref>]</td>
<td align="left">SSCL</td>
<td align="center">2019</td>
<td align="left">SMS and Twitter</td>
<td align="left">KNN, NB, RF, SVM etc.</td>
</tr>
<tr>
<td align="left">Tajalizadeh et&#x20;al. [<xref ref-type="bibr" rid="B67">67</xref>]</td>
<td align="left">INB-DenStream</td>
<td align="center">2019</td>
<td align="left">Twitter</td>
<td align="left">DenStream, StreamKM&#x2b;&#x2b;, CluStream</td>
</tr>
<tr>
<td align="left">Zhao et&#x20;al. [<xref ref-type="bibr" rid="B68">68</xref>]</td>
<td align="left">Attention &#x2b; GNN</td>
<td align="center">2020</td>
<td align="left">Twitter 1KS-10KN [<xref ref-type="bibr" rid="B69">69</xref>]</td>
<td align="left">GCN, GraphSAGE, and GAT</td>
</tr>
<tr>
<td align="left">Zhang et&#x20;al. [<xref ref-type="bibr" rid="B70">70</xref>]</td>
<td align="left">I2RELM</td>
<td align="center">2020</td>
<td align="left">Twitter</td>
<td align="left">SVM, DT, RF, BP, RBF, ELM, XG-Boost</td>
</tr>
<tr>
<td align="left">Gao et&#x20;al. [<xref ref-type="bibr" rid="B71">71</xref>]</td>
<td align="left">adCGAN</td>
<td align="center">2020</td>
<td align="left">Douban</td>
<td align="left">MCSVM [<xref ref-type="bibr" rid="B72">72</xref>], VAE [<xref ref-type="bibr" rid="B73">73</xref>]</td>
</tr>
<tr>
<td align="left">Zhao et&#x20;al. [<xref ref-type="bibr" rid="B74">74</xref>]</td>
<td align="left">Ensemble Learning</td>
<td align="center">2020</td>
<td align="left">[<xref ref-type="bibr" rid="B75">75</xref>]</td>
<td align="left">CSDNN and WSNN [<xref ref-type="bibr" rid="B76">76</xref>]</td>
</tr>
<tr>
<td align="left">Alom et&#x20;al. [<xref ref-type="bibr" rid="B77">77</xref>]</td>
<td align="left">Text-based &#x26; Combined classifier</td>
<td align="center">2020</td>
<td align="left">Twitter Social Honeypot, Twitter 1KS-10KN [<xref ref-type="bibr" rid="B69">69</xref>]</td>
<td align="left">Blacklist-based Approach [<xref ref-type="bibr" rid="B78">78</xref>]</td>
</tr>
<tr>
<td align="left">Neha et&#x20;al. [<xref ref-type="bibr" rid="B79">79</xref>]</td>
<td align="left">LSTM &#x2b; Attention</td>
<td align="center">2021</td>
<td align="left">Twitter</td>
<td align="left">Bi-LSTM, K Neighbor, Random forest, Decision tree, Naive Bayes</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Karakasli et&#x20;al. [<xref ref-type="bibr" rid="B65">65</xref>] detected spam users with classical machine learning algorithms. First, they collected Twitter user data with software named CRAWLER. Then, a total of 21 features were extracted from the raw Twitter data, and a dynamic feature selection method was used to reduce model complexity. Finally, they applied the SVM and KNN algorithms to spam user detection; the detection success rate was 87.6% for KNN and 82.9% for&#x20;SVM.</p>
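<p>As a toy illustration of this extract-features-then-classify pipeline (the paper&#x2019;s 21 features are not listed here, so the two features below are hypothetical stand-ins), the sketch classifies accounts with a minimal k-nearest-neighbours vote:</p>

```python
import numpy as np

# Hypothetical per-account features: [follower/following balance, URLs per tweet].
X_train = np.array([[2.0, 0.1], [1.5, 0.2], [1.8, 0.0],     # legitimate users
                    [-1.0, 0.9], [-0.5, 0.8], [-1.2, 1.0]])  # spam users
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, X, y, k=3):
    """Minimal KNN: majority vote among the k nearest points (Euclidean)."""
    d = np.linalg.norm(X - x, axis=1)
    nearest = y[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

# A new account whose features sit in the spam cluster.
pred = knn_predict(np.array([-0.8, 0.85]), X_train, y_train)
print(pred)  # → 1 (spam)
```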
<p>To address the difficulty of spam detection caused by short texts and large semantic variability on social networks, Jain et&#x20;al. [<xref ref-type="bibr" rid="B66">66</xref>] combined a convolutional neural network (CNN) with a long short-term memory network (LSTM) into a deep learning spam detection architecture named Sequential Stacked CNN-LSTM (SSCL). It first uses the CNN to extract feature sequences from the raw data, then feeds the feature sequences to the LSTM, and finally applies a sigmoid function to classify each message as spam or non-spam. They evaluated SSCL on two datasets, SMS and Twitter, where its precision, accuracy, recall, and F1 score reached 85.88, 99.01, 99.77, and 99.29%, respectively.</p>
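<p>A shape-level sketch of such a CNN-to-recurrent pipeline follows. It is not the SSCL architecture: weights are random, a plain tanh recurrent cell stands in for the LSTM, and all dimensions are invented; it only shows how a convolutional stage turns token embeddings into a feature sequence that a recurrent stage then summarises for a sigmoid classifier.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1d_features(embeds, kernels):
    """CNN stage: 1-D convolutions over token embeddings -> feature sequence."""
    T, d = embeds.shape
    k, _, n_f = kernels.shape
    out = np.zeros((T - k + 1, n_f))
    for t in range(T - k + 1):
        window = embeds[t:t + k]                        # (k, d)
        out[t] = np.maximum(np.einsum("kd,kdf->f", window, kernels), 0)
    return out

def simple_rnn_last(seq, Wx, Wh):
    """LSTM stand-in: a plain recurrent cell, returning the final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:
        h = np.tanh(Wx @ x_t + Wh @ h)
    return h

tokens = rng.normal(size=(12, 16))           # 12 tokens, 16-dim embeddings
kernels = rng.normal(size=(3, 16, 8)) * 0.1  # kernel width 3, 8 filters
Wx = rng.normal(size=(8, 8)) * 0.1
Wh = rng.normal(size=(8, 8)) * 0.1
w_out = rng.normal(size=8)

feats = conv1d_features(tokens, kernels)     # (10, 8) feature sequence
p_spam = sigmoid(w_out @ simple_rnn_last(feats, Wx, Wh))
print(feats.shape, float(p_spam))
```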
<p>Zhao et&#x20;al. [<xref ref-type="bibr" rid="B68">68</xref>] introduced a semi-supervised graph embedding model to detect spam bots in directed social networks, using an attention mechanism and a graph neural network over the retweet and following relationships between users. On the Twitter 1KS-10KN dataset [<xref ref-type="bibr" rid="B69">69</xref>], collected from Twitter, their method achieved the best Recall, Precision, and F1-score compared with GCN, GraphSAGE, and GAT.</p>
<p>To address the uneven distribution of spam and non-spam data on Twitter, Zhang et&#x20;al. [<xref ref-type="bibr" rid="B70">70</xref>] proposed I2RELM (Improved Incremental Fuzzy-kernel-regularized Extreme Learning Machine), which adopts fuzzy weights to improve detection accuracy on non-uniformly distributed datasets: each input is assigned a weight <italic>s</italic>
<sub>
<italic>i</italic>
</sub> in the interval (0,1], determined by the ratio of spam users to non-spam users in the whole dataset. They evaluated their method on data obtained from Twitter, where I2RELM was superior to SVM, DT, RF, BP, RBF, ELM, and XG-Boost in accuracy, TPR, precision, and F-measure.</p>
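<p>One plausible reading of these fuzzy weights can be sketched in a few lines; the exact assignment rule below (full weight for the minority spam class, the class ratio for the majority) is an assumption for illustration, not taken from the paper.</p>

```python
import numpy as np

y = np.array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0])  # 1 = spam, 0 = non-spam
ratio = (y == 1).sum() / (y == 0).sum()        # spam : non-spam = 2 : 8

# Assumed weighting rule: rare spam examples get weight 1.0, abundant
# non-spam examples get the class ratio, so all s_i lie in (0, 1] and
# errors on the minority class dominate the training objective.
s = np.where(y == 1, 1.0, ratio)
print(ratio, s[:3])  # → 0.25 [1.   0.25 0.25]
```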
<p>For spam detection in movie reviews, Gao et&#x20;al. [<xref ref-type="bibr" rid="B71">71</xref>] proposed an attention-based model named adCGAN. First, they used SkipGram to learn word vectors from all reviews and extended the SIF algorithm [<xref ref-type="bibr" rid="B80">80</xref>] to generate sentence embeddings. Then, they combined the encoded movie features with the sentence vectors and used an attention-driven generative adversarial network to detect review spam. On review data collected from Douban, adCGAN achieved an accuracy of 87.3%, markedly superior to MCSVM [<xref ref-type="bibr" rid="B72">72</xref>], VAE [<xref ref-type="bibr" rid="B73">73</xref>], and other baseline algorithms.</p>
<p>To address class imbalance in spam detection, Zhao et&#x20;al. [<xref ref-type="bibr" rid="B74">74</xref>] proposed an ensemble learning framework based on heterogeneous stacking. First, six different machine learning algorithms, SVM, CART, GNB (Gaussian Naive Bayes), KNN, RF, and LR, perform the classification task separately. Then, the outputs of the six algorithms are fed to a cost-sensitive neural network to obtain the final spam detection result. On the dataset collected by Chen et&#x20;al. [<xref ref-type="bibr" rid="B75">75</xref>], its performance was markedly superior to CSDNN and WSNN&#x20;[<xref ref-type="bibr" rid="B76">76</xref>].</p>
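<p>The stacking scheme can be sketched as follows. For brevity, three invented probability rules stand in for the six base classifiers, and the cost-sensitive network is reduced to a class-weighted logistic meta-learner; only the structure (base predictions become meta-features, minority-class errors are up-weighted) mirrors the paper.</p>

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy imbalanced data: 80 ham vs. 20 spam, separated mainly by feature 0.
n_ham, n_spam = 80, 20
X = np.vstack([rng.normal(-1, 1, size=(n_ham, 2)),
               rng.normal(1, 1, size=(n_spam, 2))])
y = np.concatenate([np.zeros(n_ham), np.ones(n_spam)])

# Stand-ins for the heterogeneous base learners: simple probability rules.
base_preds = np.column_stack([
    1 / (1 + np.exp(-X[:, 0])),        # "learner 1": logistic on feature 0
    (X[:, 0] > 0).astype(float),        # "learner 2": hard threshold
    1 / (1 + np.exp(-X.sum(axis=1))),   # "learner 3": logistic on the sum
])

# Cost-sensitive meta-learner: logistic regression whose loss up-weights
# the minority (spam) class by the inverse class frequency.
w = np.zeros(base_preds.shape[1])
b = 0.0
cost = np.where(y == 1, n_ham / n_spam, 1.0)
for _ in range(500):
    p = 1 / (1 + np.exp(-(base_preds @ w + b)))
    g = cost * (p - y)
    w -= 0.1 * base_preds.T @ g / len(y)
    b -= 0.1 * g.mean()

p = 1 / (1 + np.exp(-(base_preds @ w + b)))
acc = float(((p > 0.5) == y).mean())
print(round(acc, 2))
```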
</sec>
</sec>
<sec id="s3">
<title>3 Security in Machine Learning</title>
<p>The concept of the adversarial example was first introduced by Szegedy et&#x20;al. [<xref ref-type="bibr" rid="B81">81</xref>], who found that a machine learning classifier produces completely different results when a perturbation that is hardly perceptible to the human eye is added to the original image. Szegedy attributed the existence of adversarial examples to the discontinuity of the input-output mapping caused by highly nonlinear models. In contrast, Goodfellow et&#x20;al. [<xref ref-type="bibr" rid="B82">82</xref>] and Luo et&#x20;al. [<xref ref-type="bibr" rid="B83">83</xref>] argue that the vulnerability is mainly due to the linear parts of the model: in high-dimensional linear spaces, the superposition of many small perturbations in the network causes a large change in the output. Gilmer et&#x20;al. [<xref ref-type="bibr" rid="B84">84</xref>] attribute adversarial examples to the high dimensionality of the input data, while Ilyas et&#x20;al. [<xref ref-type="bibr" rid="B85">85</xref>] argue that adversarial examples are not bugs but features: datasets contain both robust and non-robust features, and deleting the non-robust features from the original training set yields a robust model after training. In this view, adversarial examples arise from the non-robust features and have little relation to the learning algorithm itself.</p>
<sec id="s3-1">
<title>3.1 Attacks to Machine Learning Models</title>
<p>The generation of adversarial examples misleads the target machine learning model by adding a perturbation <italic>&#x3b7;</italic>, imperceptible to the human eye, to the original data, which can be expressed as [<xref ref-type="bibr" rid="B86">86</xref>]:<disp-formula id="e1">
<mml:math id="m1">
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:munder>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:munder>
<mml:mspace width="0.28em"/>
<mml:mi>J</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mtext>&#x2009;s.t.&#x2009;</mml:mtext>
<mml:mspace width="1em"/>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="cases">
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi>&#x3b7;</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mo stretchy="false">&#x2016;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:mi>&#x3b5;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mi>y</mml:mi>
<mml:mo>&#x2260;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mspace width="1em"/>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf1">
<mml:math id="m2">
<mml:mi>J</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the loss function, <inline-formula id="inf2">
<mml:math id="m3">
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the target machine learning model, <italic>x</italic>
<sup>
<italic>adv</italic>
</sup> is the adversarial example, <italic>&#x3b7;</italic> is the adversarial perturbation added to the original data <italic>x</italic>, and <italic>&#x25b;</italic> is a norm bound that limits the size of <italic>&#x3b7;</italic>.</p>
<p>According to the attacker&#x2019;s knowledge of the target model, attacks on machine learning models can be divided into white-box and black-box attacks. A white-box attacker has full access to information such as the structure and parameters of the target model; in contrast, a black-box attacker knows nothing about the model&#x2019;s internal structure and can only query the model&#x2019;s output for a given input&#x20;[<xref ref-type="bibr" rid="B87">87</xref>].</p>
<sec id="s3-1-1">
<title>3.1.1&#x20;White-Box Attacks</title>
<p>Szegedy et&#x20;al. [<xref ref-type="bibr" rid="B81">81</xref>] first introduced a white-box attack method named L-BFGS, which crafts adversarial examples by formulating the search for the smallest possible attack perturbation as an optimization problem:<disp-formula id="e2">
<mml:math id="m4">
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:msub>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msub>
<mml:mi>c</mml:mi>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mi>&#x3b7;</mml:mi>
<mml:mo stretchy="false">&#x2016;</mml:mo>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mtext>&#x2009;s.t.&#x2009;</mml:mtext>
<mml:mspace width="1em"/>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2208;</mml:mo>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mn>0,1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(2)</label>
</disp-formula>where <italic>c</italic> is a constant, <italic>&#x3b7;</italic> is the perturbation, <inline-formula id="inf3">
<mml:math id="m5">
<mml:mi>J</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the loss function.</p>
<p>Although L-BFGS achieves a high attack success rate, it is computationally expensive. Similarly, using an optimization-based formulation, Carlini et&#x20;al. [<xref ref-type="bibr" rid="B88">88</xref>] proposed an adversarial example generation method named C&#x26;W; their research showed that the algorithm can effectively attack most existing models [<xref ref-type="bibr" rid="B89">89</xref>, <xref ref-type="bibr" rid="B90">90</xref>]. Combining C&#x26;W with Elastic Net regularization, Chen et&#x20;al. [<xref ref-type="bibr" rid="B91">91</xref>] introduced EAD to craft adversarial examples; compared with C&#x26;W, the adversarial examples generated by EAD transfer better across models.</p>
<p>To reduce the computational cost of L-BFGS, Goodfellow et&#x20;al. [<xref ref-type="bibr" rid="B82">82</xref>] introduced FGSM (Fast Gradient Sign Method), a single-step attack that adds a perturbation along the direction of the gradient, calculated as <inline-formula id="inf4">
<mml:math id="m6">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3b5;</mml:mi>
<mml:mi>sign</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mo>&#x2207;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>J</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>, where <inline-formula id="inf5">
<mml:math id="m7">
<mml:mi>J</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the loss function, <italic>&#x3b8;</italic> denotes the parameters of the target model, and <italic>&#x25b;</italic> is the perturbation size. Building on FGSM, Kurakin et&#x20;al. [<xref ref-type="bibr" rid="B92">92</xref>, <xref ref-type="bibr" rid="B93">93</xref>] introduced BIM (Basic Iterative Method), which generates adversarial examples iteratively, and they evaluated the effectiveness of BIM on real-world photos. Building on BIM and strictly limiting the perturbation size at each iteration, Madry et&#x20;al. [<xref ref-type="bibr" rid="B94">94</xref>] introduced PGD (Projected Gradient Descent); their experiments show that adversarial examples crafted by PGD transfer better. Similarly, Dong et&#x20;al. [<xref ref-type="bibr" rid="B95">95</xref>] introduced MI-FGSM, which integrates a momentum term into the iterative process; compared with BIM, MI-FGSM escapes poor local maxima more effectively during the iterations.</p>
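<p>For a model whose input gradient is available in closed form, here a single logistic unit chosen for illustration, FGSM and a PGD-style iterative variant can be sketched in a few lines of numpy; the weights and inputs are invented.</p>

```python
import numpy as np

# Toy differentiable "model": a logistic unit, so the gradient of the
# cross-entropy loss with respect to the input is available in closed form.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def loss_grad_x(x, y):
    """d/dx of the cross-entropy loss of sigmoid(w.x + b) at label y."""
    p = 1 / (1 + np.exp(-(w @ x + b)))
    return (p - y) * w

def fgsm(x, y, eps):
    # Single step along the sign of the input gradient (FGSM).
    return x + eps * np.sign(loss_grad_x(x, y))

def pgd(x, y, eps, alpha=0.05, steps=20):
    # Iterative sign steps, projecting back into the eps-ball each time (PGD).
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_x(x_adv, y))
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # enforce ||eta||_inf <= eps
    return x_adv

x = np.array([0.5, 0.2, -0.1])
y = int((w @ x + b) > 0)                   # the model's clean prediction
x_fgsm = fgsm(x, y, eps=0.3)
x_pgd = pgd(x, y, eps=0.3)
flip = lambda xa: ((w @ xa + b) > 0) != ((w @ x + b) > 0)
print(flip(x_fgsm), flip(x_pgd))           # → True True (both flip the label)
```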
<p>To find the minimal perturbation sufficient to mislead the target machine learning model, Moosavi-Dezfooli et&#x20;al. [<xref ref-type="bibr" rid="B96">96</xref>] proposed the DeepFool algorithm, based on iterative linearization of the classifier, which lets the attacker craft adversarial examples with minimal perturbation.</p>
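<p>For an affine binary classifier, a single DeepFool step is exact and has a closed form, which makes the idea easy to see: the minimal L2 perturbation projects the input onto the decision hyperplane. The weights below are invented for illustration.</p>

```python
import numpy as np

# For f(x) = w.x + b, DeepFool's minimal L2 perturbation is the projection
# of x onto the hyperplane f(x) = 0:  r = -f(x) / ||w||^2 * w,
# plus a small overshoot so the point actually crosses the boundary.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.5, 0.2, -0.1])

f_x = w @ x + b
r = -f_x / (w @ w) * w
x_adv = x + 1.02 * r                      # 2% overshoot, as in the paper

flipped = np.sign(f_x) != np.sign(w @ x_adv + b)
print(bool(flipped), round(float(np.linalg.norm(r)), 4))
```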
<p>Without computing the gradient of the target model, Baluja et&#x20;al. [<xref ref-type="bibr" rid="B97">97</xref>] introduced ATN (Adversarial Transformation Network), which performs white-box or black-box attacks by training an adversarial transformer network that transforms input data into targeted or untargeted adversarial examples. Similarly, Xiao et&#x20;al. [<xref ref-type="bibr" rid="B98">98</xref>] introduced advGAN: based on a generative adversarial network, the generator of advGAN produces the perturbation to the input data, and the discriminator distinguishes original data from the adversarial examples produced by the generator. In addition, Bai et&#x20;al. [<xref ref-type="bibr" rid="B99">99</xref>] proposed AI-GAN to craft adversarial examples. These methods [<xref ref-type="bibr" rid="B97">97</xref>&#x2013;<xref ref-type="bibr" rid="B99">99</xref>] only need to query the target model during training, which makes them fast and efficient.</p>
<p>To find the strongest attack policy, Mao et&#x20;al. [<xref ref-type="bibr" rid="B100">100</xref>] proposed Composite Adversarial Attacks (CAA). They adopted the NSGA-II genetic algorithm to find the best combination of attack algorithms from a candidate pool composed of 32 base attackers. The attack policy of CAA can be expressed as:<disp-formula id="e3">
<mml:math id="m8">
<mml:mfenced open="" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>:</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="script">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="script">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="script">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">F</mml:mi>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">F</mml:mi>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">F</mml:mi>
<mml:mo>;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(3)</label>
</disp-formula>where <inline-formula id="inf6">
<mml:math id="m9">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is one of the attack algorithms in the attack pool, <inline-formula id="inf7">
<mml:math id="m10">
<mml:msub>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> and <inline-formula id="inf8">
<mml:math id="m11">
<mml:msub>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> are the hyperparameters of <inline-formula id="inf9">
<mml:math id="m12">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="script">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>, <inline-formula id="inf10">
<mml:math id="m13">
<mml:mi mathvariant="script">F</mml:mi>
</mml:math>
</inline-formula> is the target&#x20;model.</p>
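<p>Stripped of the NSGA-II search and the 32-attacker pool, the composition in Eq. 3 amounts to chaining attack functions back to back; the two base attackers below are invented single-step perturbations used purely to show the chaining.</p>

```python
import numpy as np

w = np.array([2.0, -1.0])

def model(x):                 # toy linear scorer standing in for F
    return float(w @ x)

def grad_sign_step(eps):
    # Invented base attacker: one signed step that pushes the score down.
    return lambda x, f: x + eps * np.sign(-w)

def noise_step(eps, seed):
    # Invented base attacker: small random sign noise.
    rng = np.random.default_rng(seed)
    return lambda x, f: x + eps * rng.choice([-1.0, 1.0], size=x.shape)

def compose(policy):
    """A CAA-style policy: an ordered sequence of attackers applied in turn."""
    def attack(x, f):
        for a in policy:
            x = a(x, f)
        return x
    return attack

x = np.array([0.4, 0.1])
caa = compose([grad_sign_step(0.2), noise_step(0.05, 0), grad_sign_step(0.2)])
x_adv = caa(x, model)
print(model(x) > 0, model(x_adv) > 0)   # → True False
```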
</sec>
<sec id="s3-1-2">
<title>3.1.2&#x20;Black-Box Attacks</title>
<p>In a black-box attack, the attacker knows nothing about the target model; the mainstream approaches are based on gradient estimation and substitute&#x20;models.</p>
<sec id="s3-1-2-1">
<title>3.1.2.1 Based on Gradient Estimation</title>
<p>In this scenario, the attacker estimates the gradient information of the target model by feeding data into the target model and querying its output. Chen et&#x20;al. [<xref ref-type="bibr" rid="B101">101</xref>] extended the C&#x26;W [<xref ref-type="bibr" rid="B88">88</xref>] algorithm and proposed the Zeroth Order Optimization (ZOO) algorithm for black-box adversarial example generation. Although ZOO has a high success rate, it requires a large number of queries to the target model. To reduce the number of queries, Ilyas et&#x20;al. [<xref ref-type="bibr" rid="B102">102</xref>] used a variant of the NES algorithm [<xref ref-type="bibr" rid="B103">103</xref>] to estimate the gradient of the target model, significantly reducing the query complexity. Tu et&#x20;al. [<xref ref-type="bibr" rid="B104">104</xref>] proposed a framework named AutoZOOM, which adopts an adaptive random gradient estimation strategy and dimension reduction techniques to cut the query count; compared with ZOO [<xref ref-type="bibr" rid="B101">101</xref>], AutoZOOM achieves the same attack effect with significantly fewer queries. Du et&#x20;al. [<xref ref-type="bibr" rid="B105">105</xref>] also trained a meta attacker model to reduce the query count. Bai et&#x20;al. [<xref ref-type="bibr" rid="B106">106</xref>] proposed the NP-attack algorithm, which greatly reduces the query complexity by exploring the distribution of adversarial examples around benign inputs. In addition, Chen et&#x20;al. [<xref ref-type="bibr" rid="B107">107</xref>] proposed the HopSkipJumpAttack algorithm, which uses binary information at the decision boundary to estimate the gradient direction.</p>
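<p>The core of query-based gradient estimation is symmetric finite differences over model outputs. The sketch below is in the spirit of ZOO but much simpler than the real algorithm (which uses stochastic coordinate descent and many refinements); the hidden model is invented so the estimate can be checked against the analytic gradient.</p>

```python
import numpy as np

w_hidden = np.array([1.0, -2.0, 0.5])     # unknown to the attacker

def query(x):
    """Black-box oracle: the attacker sees only the scalar model output."""
    return float(np.tanh(w_hidden @ x))

def zoo_gradient(f, x, h=1e-4):
    """Estimate df/dx with two oracle queries per coordinate."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference
    return g

x = np.array([0.3, -0.1, 0.2])
g_est = zoo_gradient(query, x)
g_true = (1 - np.tanh(w_hidden @ x) ** 2) * w_hidden   # analytic gradient
print(np.max(np.abs(g_est - g_true)) < 1e-5)           # → True
```

Note the cost that motivates AutoZOOM and NES: this estimator needs 2 queries per input dimension per gradient, which is prohibitive for images.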
</sec>
<sec id="s3-1-2-2">
<title>3.1.2.2 Based on Substitute Model</title>
<p>Exploiting the transferability of adversarial examples, the attacker usually trains a substitute model and uses a white-box attack algorithm to craft adversarial examples on it. Papernot et&#x20;al. [<xref ref-type="bibr" rid="B108">108</xref>] first used a substitute model to generate adversarial examples; their research also shows that the attacker can mount a black-box attack via transferability even if the structure of the substitute model is completely different from that of the target model. Zhou et&#x20;al. [<xref ref-type="bibr" rid="B109">109</xref>] proposed a data-free substitute model training method (DaST) that trains a substitute model for adversarial attack without any real data. By using the gradient of the substitute model efficiently, Ma et&#x20;al. [<xref ref-type="bibr" rid="B110">110</xref>] proposed a highly query-efficient black-box adversarial attack model named SWITCH. Zhu et&#x20;al. [<xref ref-type="bibr" rid="B111">111</xref>] used the PCIe bus to learn information about machine learning models in model-privatization deployments and proposed the Hermes Attack algorithm to fully reconstruct the target model. By focusing the substitute model&#x2019;s training on data distributed near the decision boundary, Wang et&#x20;al. [<xref ref-type="bibr" rid="B112">112</xref>] significantly improved the transferability of adversarial examples between the substitute and target models. Based on meta-learning, Ma et&#x20;al. [<xref ref-type="bibr" rid="B113">113</xref>] trained a generalized substitute model named Simulator to mimic any unknown target model, which significantly reduces the query complexity to the target&#x20;model.</p>
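<p>A minimal end-to-end sketch of the substitute-model attack: query a hidden target for labels, fit a simple substitute on the queried data, attack the substitute with a white-box FGSM-style step, and transfer the result. The least-squares linear substitute is an illustrative stand-in for the deep substitutes used in practice, and all models and numbers are invented.</p>

```python
import numpy as np

rng = np.random.default_rng(3)

w_target = np.array([1.5, -1.0])          # hidden target model

def target_label(x):
    """Black-box oracle: returns only the predicted class."""
    return int(w_target @ x > 0)

# Step 1: query the target on synthetic inputs to build a labelled set.
X = rng.normal(size=(200, 2))
y = np.array([target_label(x) for x in X])

# Step 2: train a substitute — a least-squares linear scorer on +/-1 labels.
w_sub, *_ = np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)

# Step 3: white-box FGSM-style step on the SUBSTITUTE, then transfer.
x = np.array([0.5, 0.3])
y0 = target_label(x)
step = -np.sign(w_sub) if y0 == 1 else np.sign(w_sub)
x_adv = x + 0.6 * step

print(y0, target_label(x_adv))            # the label flips on the target too
```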
</sec>
</sec>
</sec>
<sec id="s3-2">
<title>3.2 Defense Against Adversarial Examples</title>
<p>Defending against adversarial examples is an important component of machine learning security, and many scholars have proposed different defense strategies in recent years. These strategies can be divided into input data transformation, adversarial example detection, and model robustness enhancement.</p>
<sec id="s3-2-1">
<title>3.2.1 Input Data Transformation</title>
<p>Since the perturbations of adversarial examples are usually visually imperceptible, Das et&#x20;al. [<xref ref-type="bibr" rid="B114">114</xref>] introduced a defense framework based on JPEG compression, which compresses away these pixel manipulations. Cheng et&#x20;al. [<xref ref-type="bibr" rid="B115">115</xref>] adopted a self-adaptive JPEG compression algorithm to defend against adversarial attacks on&#x20;video.</p>
<p>Based on generative adversarial networks, Samangouei et&#x20;al. [<xref ref-type="bibr" rid="B116">116</xref>] introduced Defense-GAN to defend against adversarial attacks. By learning the distribution of unperturbed images, Defense-GAN generates a clean sample that approximates the perturbed image. Although Defense-GAN can defend against most common attack strategies, its hyper-parameters are hard to tune. Hwang et&#x20;al. [<xref ref-type="bibr" rid="B117">117</xref>] introduced a Purifying Variational Autoencoder (PuVAE) to purify adversarial examples, which is 130&#x20;times faster than Defense-GAN [<xref ref-type="bibr" rid="B116">116</xref>] at inference time. In addition, Lin et&#x20;al. [<xref ref-type="bibr" rid="B118">118</xref>] introduced InvGAN to speed up Defense-GAN [<xref ref-type="bibr" rid="B116">116</xref>]. Zhang et&#x20;al. [<xref ref-type="bibr" rid="B119">119</xref>] proposed an image reconstruction network based on residual blocks to reconstruct adversarial examples into clean images; moreover, adding random resize and random padding layers at the end of the reconstruction network is very effective at eliminating the perturbations introduced by iterative attacks.</p>
</sec>
<sec id="s3-2-2">
<title>3.2.2 Adversarial Example Detection</title>
<p>As the name implies, adversarial example detection algorithms enhance the robustness of a machine learning system by filtering adversarial examples out of large datasets. They detect adversarial examples mainly by learning the differences in characteristics and distribution between adversarial examples and normal&#x20;data.</p>
<p>Among many works, Liu et&#x20;al. [<xref ref-type="bibr" rid="B120">120</xref>] used the gradient amplitude to estimate the probability of modifications caused by adversarial attacks and applied steganalysis to detect adversarial examples. Their experiments indicated that the method can accurately detect adversarial examples crafted by FGSM [<xref ref-type="bibr" rid="B82">82</xref>], BIM [<xref ref-type="bibr" rid="B92">92</xref>], DeepFool [<xref ref-type="bibr" rid="B96">96</xref>], and C&#x26;W [<xref ref-type="bibr" rid="B88">88</xref>]. Wang et&#x20;al. [<xref ref-type="bibr" rid="B121">121</xref>] proposed SmsNet to detect adversarial examples, which introduces a &#x201c;SmsConnection&#x201d; to extract statistical features and a dynamic pruning strategy to prevent overfitting. Their experiments indicated that SmsNet outperforms ESRAM (Enhanced Spatial Rich Model) [<xref ref-type="bibr" rid="B120">120</xref>] in detecting adversarial examples crafted by various attack algorithms.</p>
<p>Noting the sensitivity of adversarial examples to fluctuations at highly curved regions of the decision boundary, Tian et&#x20;al. [<xref ref-type="bibr" rid="B122">122</xref>] proposed the Sensitivity Inconsistency Detector (SID), which performs well even on adversarial examples with small perturbations. In addition, exploiting the observation that adversarial examples are more sensitive to channel transformation operations than clean examples, Chen et&#x20;al. [<xref ref-type="bibr" rid="B123">123</xref>] proposed a lightweight adversarial example detector based on adaptive channel transformation, named ACT-Detector. Their experiments show that the ACT-Detector can defend against most adversarial attacks.</p>
<p>To lessen the dependence on prior knowledge of attack algorithms, Sutanto et&#x20;al. [<xref ref-type="bibr" rid="B124">124</xref>] proposed a Deep Image Prior (DIP) network to detect adversarial examples; they used a blurring network as the initial condition and trained the DIP network using only normal, noiseless images. Its fast detection speed on real images also makes it applicable to real-time AI systems. Liang et&#x20;al. [<xref ref-type="bibr" rid="B125">125</xref>] treat the perturbation crafted by adversarial attacks as a kind of noise and use scalar quantization and a smoothing spatial filter to implement adaptive noise reduction for input images.</p>
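<p>Liang et&#x20;al.&#x2019;s two ingredients, scalar quantization and a smoothing spatial filter, are easy to sketch; the level count, kernel size, and images below are illustrative choices, not the paper&#x2019;s adaptive parameters.</p>

```python
import numpy as np

def scalar_quantize(img, levels=16):
    """Scalar quantization: snap each pixel to one of `levels` values in [0, 1]."""
    return np.round(img * (levels - 1)) / (levels - 1)

def mean_filter(img, k=3):
    """Smoothing spatial filter: k x k mean with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

rng = np.random.default_rng(4)
clean = rng.random((8, 8))
perturbed = np.clip(clean + rng.normal(0, 0.03, size=clean.shape), 0, 1)

# Noise reduction pipeline: quantize, then smooth.
denoised = mean_filter(scalar_quantize(perturbed))
print(denoised.shape)
```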
</sec>
<sec id="s3-2-3">
<title>3.2.3 Model Robustness Enhancement</title>
<p>Model robustness enhancement mainly includes adversarial training and certified training. Adversarial training improves a model&#x2019;s immunity to adversarial examples by adding adversarial examples to its training set [<xref ref-type="bibr" rid="B126">126</xref>]. Certified training enhances robustness by constraining the output space of each layer of the neural network under specified input perturbations during training.</p>
<sec id="s3-2-3-1">
<title>3.2.3.1 Adversarial Training</title>
<p>Adversarial training is an effective method for defending against adversarial example attacks; the training process can be approximated by the following minimum-maximum optimization problem [<xref ref-type="bibr" rid="B94">94</xref>]:<disp-formula id="e4">
<mml:math id="m14">
<mml:munder>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:munder>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="double-struck">E</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="script">D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:munder>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3b4;</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mi>L</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b4;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(4)</label>
</disp-formula>where <italic>D</italic> is the set of training data, <inline-formula id="inf11">
<mml:math id="m15">
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the target neural network, <italic>&#x3b8;</italic> is the parameter of <inline-formula id="inf12">
<mml:math id="m16">
<mml:mi>f</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula>, <italic>L</italic> is the loss function, and <italic>&#x3b4;</italic> is the adversarial perturbation.</p>
<p>Szegedy et&#x20;al. [<xref ref-type="bibr" rid="B81">81</xref>] first introduced the concept of adversarial training by training the neural network on a dataset composed of clean data and adversarial examples. Goodfellow et&#x20;al. [<xref ref-type="bibr" rid="B82">82</xref>] also tried to enhance the robustness of machine learning models by adding adversarial examples crafted by the FGSM algorithm to the training set. Although this is effective in defending against FGSM attacks, it is helpless against more aggressive algorithms such as C&#x26;W [<xref ref-type="bibr" rid="B88">88</xref>] and PGD [<xref ref-type="bibr" rid="B94">94</xref>]. Madry et&#x20;al. [<xref ref-type="bibr" rid="B94">94</xref>] approached robustness through the lens of robust optimization: they optimized the parameters of the network model via a saddle-point formulation, thereby reducing the model's loss on adversarial examples. Although adversarial training with the PGD [<xref ref-type="bibr" rid="B94">94</xref>] algorithm can significantly enhance the robustness of the model, its computational cost is very high when training on large-scale datasets.</p>
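The min-max problem of Eq. 4 is typically solved by alternating an inner PGD ascent on the input with an outer descent step on the parameters. Below is a minimal sketch for a toy logistic model; the function names, step sizes, and iteration counts are illustrative assumptions, not Madry et&#x20;al.'s implementation:

```python
import numpy as np

def pgd_linear(w, b, x, y, eps=0.1, alpha=0.02, steps=10):
    """Inner maximization: find delta in the L-inf ball of radius eps that
    increases the cross-entropy loss of f(x) = sigmoid(w.x + b)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = w @ (x + delta) + b
        p = 1.0 / (1.0 + np.exp(-z))       # predicted probability of class 1
        grad = (p - y) * w                  # gradient of the loss w.r.t. the input
        # Ascent step on the loss, then projection back onto the L-inf ball.
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

def adv_train_step(w, b, x, y, lr=0.1, eps=0.1):
    """Outer minimization: one SGD step on the worst-case perturbed example."""
    x_adv = x + pgd_linear(w, b, x, y, eps=eps)
    z = w @ x_adv + b
    p = 1.0 / (1.0 + np.exp(-z))
    return w - lr * (p - y) * x_adv, b - lr * (p - y)
```

For deep networks the same structure holds, with the input gradient obtained by backpropagation instead of the closed-form expression used here.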
<p>To reduce the computational cost of adversarial training, Shafahi et&#x20;al. [<xref ref-type="bibr" rid="B127">127</xref>] introduced a fast adversarial training method that updates both the model parameters and the image perturbations in each update step; its training speed is 3&#x2013;30&#x20;times that of PGD [<xref ref-type="bibr" rid="B94">94</xref>]. Zhang et&#x20;al. [<xref ref-type="bibr" rid="B128">128</xref>] found that the adversarial perturbation is coupled only with the first layer of the neural network. Based on this, they proposed an adversarial training algorithm named YOPO (You Only Propagate Once), which focuses the adversary computation on the input layer; experiments indicated that the training efficiency of YOPO was 4&#x2013;5&#x20;times that of the original PGD training [<xref ref-type="bibr" rid="B94">94</xref>]. Besides, the research of Wong et&#x20;al. [<xref ref-type="bibr" rid="B129">129</xref>] showed that combining FGSM [<xref ref-type="bibr" rid="B82">82</xref>] with random initialization in adversarial training can significantly reduce the training cost while achieving effects similar to the original PGD training&#x20;[<xref ref-type="bibr" rid="B94">94</xref>].</p>
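The FGSM-with-random-initialization scheme replaces the multi-step PGD inner loop with a random start and a single gradient-sign step. A minimal sketch for a toy logistic model follows; the 1.25&#x2009;&#xb7;&#x2009;&#x3b5; step size is in the spirit of Wong et&#x20;al. [129], but the function name and model are illustrative assumptions:

```python
import numpy as np

def fgsm_rand(w, b, x, y, eps=0.1, rng=None):
    """One-step adversarial perturbation: random start in the L-inf ball of
    radius eps, then a single FGSM step, for f(x) = sigmoid(w.x + b)."""
    rng = np.random.default_rng(0) if rng is None else rng
    delta = rng.uniform(-eps, eps, size=x.shape)   # random initialization
    z = w @ (x + delta) + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad = (p - y) * w                              # loss gradient w.r.t. the input
    # Single large step in the gradient-sign direction, projected back to the ball.
    return np.clip(delta + 1.25 * eps * np.sign(grad), -eps, eps)
```

Because only one gradient evaluation per example is needed, each epoch costs roughly as much as standard training, which is where the speedup over multi-step PGD comes from.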
</sec>
<sec id="s3-2-3-2">
<title>3.2.3.2 Certified Training</title>
<p>Gowal et&#x20;al. [<xref ref-type="bibr" rid="B130">130</xref>] argued that robustness to the PGD [<xref ref-type="bibr" rid="B94">94</xref>] attack is not a true measure of robustness. Focusing on formal verification, they proposed a certified training algorithm for neural networks named IBP (Interval Bound Propagation). Although the IBP algorithm is not only computationally cheap but also significantly reduces the verified error rate, its training process is unstable, especially in the initial stages. To enhance the stability of IBP, Zhang et&#x20;al. [<xref ref-type="bibr" rid="B131">131</xref>] combined the IBP [<xref ref-type="bibr" rid="B130">130</xref>] algorithm with a tight linear relaxation algorithm named CROWN [<xref ref-type="bibr" rid="B132">132</xref>] and proposed a verified training algorithm named CROWN-IBP. The experimental results showed that both the standard and verified errors of CROWN-IBP outperformed those of IBP [<xref ref-type="bibr" rid="B130">130</xref>].</p>
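The core of IBP is propagating an axis-aligned box of possible inputs through each layer, so that the output bounds certifiably contain every perturbed input. A minimal sketch for one affine layer followed by ReLU (function names are illustrative assumptions):

```python
import numpy as np

def ibp_affine(W, bias, lo, hi):
    """Propagate the box [lo, hi] through x -> W @ x + bias."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0   # box center and half-width
    mid_out = W @ mid + bias
    rad_out = np.abs(W) @ rad                      # worst case per output coordinate
    return mid_out - rad_out, mid_out + rad_out

def ibp_relu(lo, hi):
    """ReLU is monotone, so the box bounds map element-wise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```

Certified training then adds a loss term on these output bounds, so that no input inside the box can change the predicted class.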
</sec>
</sec>
</sec>
</sec>
<sec id="s4">
<title>4 Security of Machine Learning in Social Networks</title>
<p>Most existing research on adversarial examples focuses on the field of image classification. However, generating adversarial examples in social networks requires processing data such as text and graphs; unlike images, text and graphs have discrete feature distributions, which makes it more difficult to craft adversarial examples from them. In this section, we mainly review research on adversarial examples in sentiment analysis (SA), spam detection (SD), and recommendation systems (RS), as well as some research on question-answering robots and neural machine translation. Since sentiment analysis and spam detection both operate on text and are similar in how adversarial examples are generated and defended against, we review them in one subsection; because research on question-answering robots and neural machine translation is relatively scarce, we also review those two together in one subsection. <xref ref-type="table" rid="T4">Table&#x20;4</xref> and <xref ref-type="table" rid="T5">Table&#x20;5</xref> list some recent adversarial example generation and defense algorithms in sentiment analysis, spam detection, and machine&#x20;translation.</p>
<table-wrap id="T4" position="float">
<label>TABLE 4</label>
<caption>
<p>Attacks on machine learning in social networks.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th rowspan="2" align="left">Authors</th>
<th rowspan="2" align="center">Year</th>
<th rowspan="2" align="center">Method</th>
<th rowspan="2" align="center">Dataset</th>
<th rowspan="2" align="center">Baseline</th>
<th colspan="2" align="center">Attack type</th>
<th colspan="3" align="center">Aspect</th>
</tr>
<tr>
<th align="center">Black-box</th>
<th align="center">White-box</th>
<th align="center">SA</th>
<th align="center">SD</th>
<th align="center">RS</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Gao et&#x20;al. [<xref ref-type="bibr" rid="B133">133</xref>]</td>
<td align="center">2018</td>
<td align="left">DeepWordBug</td>
<td align="left">Enron spam emails, IMDB</td>
<td align="left">Projected FGSM, Random &#x2b; DeepWordBug Transformer</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Vijayaraghavan et&#x20;al. [<xref ref-type="bibr" rid="B134">134</xref>]</td>
<td align="center">2019</td>
<td align="left">AEG</td>
<td align="left">IMDB, AG News</td>
<td align="left">DeepWordBug [<xref ref-type="bibr" rid="B133">133</xref>], NMT-BT [<xref ref-type="bibr" rid="B135">135</xref>]</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Ren et&#x20;al. [<xref ref-type="bibr" rid="B136">136</xref>]</td>
<td align="center">2020</td>
<td align="left">Large Scale Adversarial Attack</td>
<td align="left">IMDB, Rotten Tomatoes Movie Reviews</td>
<td align="left">FGSM [<xref ref-type="bibr" rid="B82">82</xref>], DeepFool [<xref ref-type="bibr" rid="B96">96</xref>], Textbugger [<xref ref-type="bibr" rid="B137">137</xref>]</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Li et&#x20;al. [<xref ref-type="bibr" rid="B138">138</xref>]</td>
<td align="center">2020</td>
<td align="left">BERT-Attack</td>
<td align="left">AG News, IMDB, Yelp, FAKE, SNLI, MNLI</td>
<td align="left">TextFooler [<xref ref-type="bibr" rid="B139">139</xref>], Genetic attack [<xref ref-type="bibr" rid="B140">140</xref>]</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Nuo et&#x20;al. [<xref ref-type="bibr" rid="B141">141</xref>]</td>
<td align="center">2020</td>
<td align="left">WordChange</td>
<td align="left">Ctrip, <ext-link ext-link-type="uri" xlink:href="http://JD.com">JD.com</ext-link>
</td>
<td align="left">TF-IDF, TextRank</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Li et&#x20;al. [<xref ref-type="bibr" rid="B142">142</xref>]</td>
<td align="center">2020</td>
<td align="left">CLARE</td>
<td align="left">Yelp, AG News, MNLI, QNLI</td>
<td align="left">TextFooler [<xref ref-type="bibr" rid="B139">139</xref>], TextFooler &#x2b; LM, BERTAttack</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Garg et&#x20;al. [<xref ref-type="bibr" rid="B143">143</xref>]</td>
<td align="center">2020</td>
<td align="left">BAE</td>
<td align="left">Amazon Yelp, IMDB, MR</td>
<td align="left">TextFooler [<xref ref-type="bibr" rid="B139">139</xref>]</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Jin et&#x20;al. [<xref ref-type="bibr" rid="B139">139</xref>]</td>
<td align="center">2020</td>
<td align="left">TextFooler</td>
<td align="left">AG News, FAKE, MR, Yelp, IMDB</td>
<td align="left">Textbugger [<xref ref-type="bibr" rid="B137">137</xref>]</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Maheshwary et&#x20;al. [<xref ref-type="bibr" rid="B144">144</xref>]</td>
<td align="center">2021</td>
<td align="left">Hard Label Attack</td>
<td align="left">AG News, Yahoo Answers, MR, IMDB, Yelp, SNLI, MNLI</td>
<td align="left">TextFooler [<xref ref-type="bibr" rid="B139">139</xref>], PSO [<xref ref-type="bibr" rid="B145">145</xref>], AEG [<xref ref-type="bibr" rid="B134">134</xref>]</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Yang et&#x20;al. [<xref ref-type="bibr" rid="B146">146</xref>]</td>
<td align="center">2017</td>
<td align="left">Co-visitation attack</td>
<td align="left">YouTube, eBay, Amazon, Yelp</td>
<td align="left">Popular-item-attack, Random-item-attack</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Fang et&#x20;al. [<xref ref-type="bibr" rid="B147">147</xref>]</td>
<td align="center">2018</td>
<td align="left">Graph Poisoning Attack</td>
<td align="left">MovieLens-100K, Amazon Instant Video</td>
<td align="left">Co-visitation attack [<xref ref-type="bibr" rid="B146">146</xref>]</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Christakopoulou et&#x20;al. [<xref ref-type="bibr" rid="B148">148</xref>]</td>
<td align="center">2019</td>
<td align="left">Oblivious Recommender System Attack</td>
<td align="left">MovieLens-100K, MovieLens-1M</td>
<td align="left">&#x2014;</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Sun et&#x20;al. [<xref ref-type="bibr" rid="B149">149</xref>]</td>
<td align="center">2020</td>
<td align="left">NIPA</td>
<td align="left">Cora, Citeseer, Pubmed</td>
<td align="left">Random, Preferential, PGA</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Song et&#x20;al. [<xref ref-type="bibr" rid="B150">150</xref>]</td>
<td align="center">2020</td>
<td align="left">PoisonRec</td>
<td align="left">Steam, MovieLens-1M and Amazon</td>
<td align="left">Popular Attack, Random Attack, Middle Attack, Power Item Attack, ConsLOP</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Chang et&#x20;al. [<xref ref-type="bibr" rid="B151">151</xref>]</td>
<td align="center">2020</td>
<td align="left">GF-Attack</td>
<td align="left">Cora, Citeseer, Pubmed</td>
<td align="left">Random, Degree, RL-S2V, Aclass</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Fang et&#x20;al. [<xref ref-type="bibr" rid="B152">152</xref>]</td>
<td align="center">2020</td>
<td align="left">TNA</td>
<td align="left">Yelp, Amazon, Digital Music</td>
<td align="left">PGA [<xref ref-type="bibr" rid="B153">153</xref>], SGLD [<xref ref-type="bibr" rid="B153">153</xref>]</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Lin et&#x20;al. [<xref ref-type="bibr" rid="B154">154</xref>]</td>
<td align="center">2020</td>
<td align="left">AUSH</td>
<td align="left">MovieLens-100K, Amazon, FilmTrust</td>
<td align="left">Random, Segment, Bandwagon, DCGAN</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Fan et&#x20;al. [<xref ref-type="bibr" rid="B155">155</xref>]</td>
<td align="center">2021</td>
<td align="left">CopyAttack</td>
<td align="left">MovieLens-10M &#x26; Flixster, MovieLens-20M &#x26; Netflix</td>
<td align="left">RL-Generative, RandomAttack, TargetAttack</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Zhan et&#x20;al. [<xref ref-type="bibr" rid="B156">156</xref>]</td>
<td align="center">2021</td>
<td align="left">BBGA</td>
<td align="left">Cora, Citeseer, Cora-ML</td>
<td align="left">DICE-BB, Random, Mettack, Aclass</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Finkelshtein et&#x20;al. [<xref ref-type="bibr" rid="B157">157</xref>]</td>
<td align="center">2021</td>
<td align="left">Single-Node Attack</td>
<td align="left">Cora, CiteSeer, PubMed, Twitter-Hateful-Users</td>
<td align="left">EdgeGrad</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Huang et&#x20;al. [<xref ref-type="bibr" rid="B158">158</xref>]</td>
<td align="center">2021</td>
<td align="left">Poisoning Attack</td>
<td align="left">Movielens-100K, Movielens-1M, Last.fm</td>
<td align="left">Random, Bandwagon, MF</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
<tr>
<td align="left">Wu et&#x20;al. [<xref ref-type="bibr" rid="B159">159</xref>]</td>
<td align="center">2021</td>
<td align="left">TrialAttack</td>
<td align="left">Movielens-100K, Movielens-1M, FilmTrust</td>
<td align="left">Random, Average, PGA [<xref ref-type="bibr" rid="B153">153</xref>], TNA [<xref ref-type="bibr" rid="B152">152</xref>], AUSH [<xref ref-type="bibr" rid="B154">154</xref>]</td>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
<td align="left"/>
<td align="left"/>
<td align="center">
<italic>&#x2713;</italic>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T5" position="float">
<label>TABLE 5</label>
<caption>
<p>Defenses against adversarial examples in social networks.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th rowspan="2" align="left">Authors</th>
<th rowspan="2" align="center">Year</th>
<th rowspan="2" align="center">Method</th>
<th rowspan="2" align="center">Dataset</th>
<th rowspan="2" align="center">Baselines</th>
<th rowspan="2" align="center">Attacks</th>
<th colspan="3" align="center">Aspect</th>
</tr>
<tr>
<th align="center">SA</th>
<th align="center">SD</th>
<th align="center">RS</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Pruthi et&#x20;al. [<xref ref-type="bibr" rid="B160">160</xref>]</td>
<td align="center">2019</td>
<td align="left">Robust Word Recognition</td>
<td align="left">SST, IMDB, Stanford Sentiment Treebank</td>
<td align="left">data augmentation [<xref ref-type="bibr" rid="B154">154</xref>], adversarial training [<xref ref-type="bibr" rid="B82">82</xref>]</td>
<td align="left">Swap, Drop, Keyboard, Add</td>
<td align="center">&#x2713;</td>
<td align="center">&#x2713;</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Jia et&#x20;al. [<xref ref-type="bibr" rid="B161">161</xref>]</td>
<td align="center">2019</td>
<td align="left">Certified Robustness Training</td>
<td align="left">IMDB, SNLI</td>
<td align="left">Standard Training, Data Augmentation</td>
<td align="left">Genetic attack [<xref ref-type="bibr" rid="B140">140</xref>]</td>
<td align="center">&#x2713;</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Zhou et&#x20;al. [<xref ref-type="bibr" rid="B162">162</xref>]</td>
<td align="center">2019</td>
<td align="left">DISP</td>
<td align="left">SST-2, IMDB</td>
<td align="left">Adversarial Data Augmentation (ADA), Adversarial Training (AT)</td>
<td align="left">Insertion, Deletion, Swap, Random, Embed</td>
<td align="center">&#x2713;</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Si et&#x20;al. [<xref ref-type="bibr" rid="B163">163</xref>]</td>
<td align="center">2020</td>
<td align="left">AMDA</td>
<td align="left">SST-2, IMDB</td>
<td align="left">Adversarial Data Augmentation (ADA)</td>
<td align="left">TextFooler [<xref ref-type="bibr" rid="B139">139</xref>], PWWS [<xref ref-type="bibr" rid="B164">164</xref>]</td>
<td align="center">&#x2713;</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Wang et&#x20;al. [<xref ref-type="bibr" rid="B165">165</xref>]</td>
<td align="center">2020</td>
<td align="left">MUDE</td>
<td align="left">Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993)</td>
<td align="left">Enchant 3 spell checker, scRNN</td>
<td align="left">Permutation, Insertion, Deletion, Substitution</td>
<td align="center">&#x2713;</td>
<td align="center">&#x2713;</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Shi et&#x20;al. [<xref ref-type="bibr" rid="B166">166</xref>]</td>
<td align="center">2020</td>
<td align="left">Transformers Robustness Verify</td>
<td align="left">Yelp, SST</td>
<td align="left">IBP</td>
<td align="left">&#x2014;</td>
<td align="center">&#x2713;</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Ye et&#x20;al. [<xref ref-type="bibr" rid="B167">167</xref>]</td>
<td align="center">2020</td>
<td align="left">Safer</td>
<td align="left">IMDB, Amazon</td>
<td align="left">Certified Robustness Training [<xref ref-type="bibr" rid="B161">161</xref>], IBP</td>
<td align="left">Genetic attack [<xref ref-type="bibr" rid="B140">140</xref>]</td>
<td align="center">&#x2713;</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Mozes et&#x20;al. [<xref ref-type="bibr" rid="B168">168</xref>]</td>
<td align="center">2020</td>
<td align="left">FGWS</td>
<td align="left">SST-2, IMDB</td>
<td align="left">DISP [<xref ref-type="bibr" rid="B162">162</xref>]</td>
<td align="left">Genetic attack [<xref ref-type="bibr" rid="B140">140</xref>], PWWS [<xref ref-type="bibr" rid="B164">164</xref>]</td>
<td align="center">&#x2713;</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Zeng et&#x20;al. [<xref ref-type="bibr" rid="B169">169</xref>]</td>
<td align="center">2021</td>
<td align="left">RanMASK</td>
<td align="left">AG News, SST-2</td>
<td align="left">Safer [<xref ref-type="bibr" rid="B167">167</xref>]</td>
<td align="left">TextFooler [<xref ref-type="bibr" rid="B139">139</xref>], Bert-Attack [<xref ref-type="bibr" rid="B138">138</xref>], DeepWordBug [<xref ref-type="bibr" rid="B133">133</xref>]</td>
<td align="center">&#x2713;</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Wang et&#x20;al. [<xref ref-type="bibr" rid="B170">170</xref>]</td>
<td align="center">2021</td>
<td align="left">TextFirewall</td>
<td align="left">IMDB, Yelp</td>
<td align="left">Adversarial Training, Spelling Check and Recovery (SCR), RSE</td>
<td align="left">Deepwordbug [<xref ref-type="bibr" rid="B133">133</xref>], Genetic attack [<xref ref-type="bibr" rid="B140">140</xref>], PWWS [<xref ref-type="bibr" rid="B164">164</xref>]</td>
<td align="center">&#x2713;</td>
<td align="center">&#x2713;</td>
<td align="left"/>
</tr>
<tr>
<td align="left">Karimi et&#x20;al. [<xref ref-type="bibr" rid="B171">171</xref>]</td>
<td align="center">2021</td>
<td align="left">BAT</td>
<td align="left">SemEval 2014 task 4, SemEval 2016 task 5</td>
<td align="left">BERT [<xref ref-type="bibr" rid="B172">172</xref>]</td>
<td align="left">Gradient attack [<xref ref-type="bibr" rid="B173">173</xref>]</td>
<td align="center">&#x2713;</td>
<td align="left"/>
<td align="left"/>
</tr>
<tr>
<td align="left">Du et&#x20;al. [<xref ref-type="bibr" rid="B174">174</xref>]</td>
<td align="center">2019</td>
<td align="left">FNCF</td>
<td align="left">Movielens-100K, Movielens-1M</td>
<td align="left">Distillation [<xref ref-type="bibr" rid="B175">175</xref>]</td>
<td align="left">C&#x26;W [<xref ref-type="bibr" rid="B88">88</xref>]</td>
<td align="left"/>
<td align="left"/>
<td align="center">&#x2713;</td>
</tr>
<tr>
<td align="left">Tang et&#x20;al. [<xref ref-type="bibr" rid="B176">176</xref>]</td>
<td align="center">2019</td>
<td align="left">AMR</td>
<td align="left">Pinterest, Amazon</td>
<td align="left">POP, MF-BPR, DUIF, VBPR</td>
<td align="left">FGSM [<xref ref-type="bibr" rid="B82">82</xref>]</td>
<td align="left"/>
<td align="left"/>
<td align="center">&#x2713;</td>
</tr>
<tr>
<td align="left">Manotumruksa et&#x20;al. [<xref ref-type="bibr" rid="B177">177</xref>]</td>
<td align="center">2020</td>
<td align="left">SAO</td>
<td align="left">MovieLens, Beauty, Video, Foursquare, Brightkite, Yelp</td>
<td align="left">MostPop, BPR, APR, SASRec, ASASRec</td>
<td align="left">&#x2014;</td>
<td align="left"/>
<td align="left"/>
<td align="center">&#x2713;</td>
</tr>
<tr>
<td align="left">Li et&#x20;al. [<xref ref-type="bibr" rid="B178">178</xref>]</td>
<td align="center">2020</td>
<td align="left">SACRA</td>
<td align="left">Yelp, Foursquare</td>
<td align="left">WRMF, MMMF, BPRMF, CofiRank, CLiMF, USG, GeoMF, etc.</td>
<td align="left">FGSM [<xref ref-type="bibr" rid="B82">82</xref>]</td>
<td align="left"/>
<td align="left"/>
<td align="center">&#x2713;</td>
</tr>
<tr>
<td align="left">Wang et&#x20;al. [<xref ref-type="bibr" rid="B179">179</xref>]</td>
<td align="center">2020</td>
<td align="left">ATMBPR</td>
<td align="left">Movielens-100K, Yelp</td>
<td align="left">BPR, CDAE, MPR, AMF, MLP, NeuMF, LRML, JRL, etc.</td>
<td align="left">FGSM [<xref ref-type="bibr" rid="B82">82</xref>]</td>
<td align="left"/>
<td align="left"/>
<td align="center">&#x2713;</td>
</tr>
<tr>
<td align="left">Shahrasbi et&#x20;al. [<xref ref-type="bibr" rid="B180">180</xref>]</td>
<td align="center">2020</td>
<td align="left">Semi-Supervised Attack Detection</td>
<td align="left">Instacart grocery</td>
<td align="left">LSTM</td>
<td align="left">&#x2014;</td>
<td align="left"/>
<td align="left"/>
<td align="center">&#x2713;</td>
</tr>
<tr>
<td align="left">Wu et&#x20;al. [<xref ref-type="bibr" rid="B181">181</xref>]</td>
<td align="center">2021</td>
<td align="left">APT</td>
<td align="left">FilmTrust, MovieLens-100K, MovieLens-1M, Yelp</td>
<td align="left">Adversarial Training (AT), PCMF</td>
<td align="left">AUSH [<xref ref-type="bibr" rid="B154">154</xref>], TNA [<xref ref-type="bibr" rid="B152">152</xref>], PGA [<xref ref-type="bibr" rid="B153">153</xref>]</td>
<td align="left"/>
<td align="left"/>
<td align="center">&#x2713;</td>
</tr>
<tr>
<td align="left">Yi et&#x20;al. [<xref ref-type="bibr" rid="B182">182</xref>]</td>
<td align="center">2021</td>
<td align="left">DAVE</td>
<td align="left">Yelp, Digital Music, MovieLens-1M, MovieLens-100K, Pinterest</td>
<td align="left">NeuMF, CDAE, CFGAN, APR, ACAE, AVB, VAEGAN, CVAE-GAN, RecVAE</td>
<td align="left">AAE</td>
<td align="left"/>
<td align="left"/>
<td align="center">&#x2713;</td>
</tr>
</tbody>
</table>
</table-wrap>
<sec id="s4-1">
<title>4.1 Security in Sentiment Analysis and Spam Detection</title>
<sec id="s4-1-1">
<title>4.1.1 Adversarial Attacks</title>
<p>The goal of an attack against sentiment analysis and spam detection systems is to craft a text <italic>x</italic>&#x2032; that is semantically similar to the original text <italic>x</italic> but misleads the target classifier. It can be expressed as:<disp-formula id="e5">
<mml:math id="m17">
<mml:munder>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:munder>
<mml:mi>S</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="1em"/>
<mml:mtext>&#x2009;s.t.&#x2009;</mml:mtext>
<mml:mspace width="1em"/>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2260;</mml:mo>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(5)</label>
</disp-formula>where function <inline-formula id="inf13">
<mml:math id="m18">
<mml:mi>S</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is used to compute the semantic similarity between <italic>x</italic> and <italic>x</italic>&#x2032;, <inline-formula id="inf14">
<mml:math id="m19">
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the target&#x20;model.</p>
<p>Adversarial example generation algorithms for text mainly work by finding the keywords in a sentence that have the greatest impact on the classification result and then adding perturbations to those keywords.</p>
<p>Gao et&#x20;al. [<xref ref-type="bibr" rid="B133">133</xref>] proposed an effective black-box text adversarial example generation method named DeepWordBug, introducing the temporal score (TS) and temporal tail score (TTS) to evaluate the importance of words in a sentence. They first compute TS and TTS by querying the output of the target model after masking some words, and then combine the two values to compute the importance of every word in the sentence:<disp-formula id="e6">
<mml:math id="m20">
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mo movablelimits="false" form="prefix">TS</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mo movablelimits="false" form="prefix">TTS</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mi>S</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="normal">T</mml:mi>
<mml:mi mathvariant="normal">S</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="normal">T</mml:mi>
<mml:mi mathvariant="normal">T</mml:mi>
<mml:mi mathvariant="normal">S</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(6)</label>
</disp-formula>where, <inline-formula id="inf15">
<mml:math id="m21">
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the target machine learning model, and <italic>x</italic>
<sub>
<italic>i</italic>
</sub> is the <italic>i</italic>-th word in the sentence. Finally, they modify some characters in the keywords to generate text adversarial examples. Experiments show that although DeepWordBug can generate text adversarial examples with a high success rate, it introduces grammatical errors and can easily be defended against by grammar-checking&#x20;tools.</p>
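The temporal scoring of Eq. 6 can be sketched in a few lines. The classifier F below, its lexicon, and the sentence are toy stand-ins invented for illustration (a real attack queries the actual target model); the combined score TS + &#x3bb;&#xb7;TTS ranks words, and a character swap perturbs the top-ranked keyword:

```python
# Toy sketch of DeepWordBug-style keyword scoring (Eq. 6).
# F is a stand-in classifier: the fraction of words in a tiny
# "negative" lexicon. Lexicon and sentence are illustrative only.
NEGATIVE = {"terrible", "awful", "boring"}

def F(words):
    if not words:
        return 0.0
    return sum(w in NEGATIVE for w in words) / len(words)

def score_words(words, lam=1.0):
    """Combined score: TS(x_i) + lam * TTS(x_i)."""
    scores = []
    for i in range(len(words)):
        ts = F(words[:i + 1]) - F(words[:i])      # head (temporal) score
        tts = F(words[i:]) - F(words[i + 1:])     # tail (temporal tail) score
        scores.append(ts + lam * tts)
    return scores

def swap_chars(word):
    """Character-level perturbation: swap the two middle characters."""
    if len(word) < 4:
        return word
    m = len(word) // 2
    return word[:m - 1] + word[m] + word[m - 1] + word[m + 1:]

sentence = "the plot was terrible overall".split()
scores = score_words(sentence)
target = sentence[max(range(len(sentence)), key=scores.__getitem__)]
adversarial = [swap_chars(w) if w == target else w for w in sentence]
```

The highest-scoring word is the one whose presence moves the toy classifier most, which is then misspelled so that the lexicon lookup misses it.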
<p>Vijayaraghavan et&#x20;al. [<xref ref-type="bibr" rid="B134">134</xref>] proposed an Adversarial Examples Generator (AEG) based on reinforcement learning to craft non-targeted text adversarial examples. The authors evaluated the effectiveness of the AEG algorithm on two target sentiment analysis convolutional neural networks, CNN-Word and CNN-Char; the experiments showed that the AEG model was able to fool the target sentiment analysis models with high success rates while preserving the semantics of the original&#x20;text.</p>
<p>Li et&#x20;al. [<xref ref-type="bibr" rid="B138">138</xref>] also proposed a word-level text adversarial example generation algorithm named BERT-Attack. Different from [<xref ref-type="bibr" rid="B133">133</xref>], they first find the vulnerable words in a sentence by masking each word in turn and querying the target model for the correct label. They then use the pre-trained model BERT to replace the vulnerable words with grammatically correct and semantically similar ones. The vulnerability of each word is calculated as:<disp-formula id="e7">
<mml:math id="m22">
<mml:msub>
<mml:mrow>
<mml:mi>I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x5c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(7)</label>
</disp-formula>where <inline-formula id="inf16">
<mml:math id="m23">
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the target machine learning model, <inline-formula id="inf17">
<mml:math id="m24">
<mml:mi>S</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the input text, and <inline-formula id="inf18">
<mml:math id="m25">
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x5c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>[</mml:mo>
<mml:mi>MASK</mml:mi>
<mml:mo>]</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:math>
</inline-formula> is the text in which <italic>w</italic>
<sub>
<italic>i</italic>
</sub> is replaced with [<italic>MASK</italic>].</p>
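The leave-one-out masking of Eq. 7 reduces to a short loop. The scorer F, its lexicon, and the sentence below are invented stand-ins for the target model; a real BERT-Attack would query the actual classifier and use a masked language model to propose replacements:

```python
# Sketch of the word-vulnerability ranking in Eq. 7.
# F is a toy sentiment scorer standing in for the target model.
POSITIVE = {"great", "wonderful", "enjoyed"}

def F(words):
    return sum(w in POSITIVE for w in words) / max(len(words), 1)

def vulnerability(words):
    """I_{w_i} = F(S) - F(S with w_i masked), for each position i."""
    base = F(words)
    return [base - F(words[:i] + ["[MASK]"] + words[i + 1:])
            for i in range(len(words))]

S = "we enjoyed the show".split()
I = vulnerability(S)
most_vulnerable = S[max(range(len(S)), key=I.__getitem__)]
```

The word whose masking causes the largest score drop is the one the attack would replace first.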
<p>Based on multiple modification strategies, Nuo et&#x20;al. [<xref ref-type="bibr" rid="B141">141</xref>] proposed a black-box Chinese text adversarial example generation method named WordChange. Similar to the algorithm for calculating <italic>TS</italic> in <xref ref-type="disp-formula" rid="e6">Eq. 6</xref>, they search for keywords by deleting words from the sentence one at a time and querying whether the output of the model changes, and then apply &#x201c;insert&#x201d; and &#x201c;swap&#x201d; strategies to these keywords, thereby generating Chinese text adversarial examples that can fool the machine learning&#x20;model.</p>
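The deletion-based keyword search used by WordChange needs only the model's output label, which can be illustrated as follows. The stand-in classifier and spam lexicon are invented for the sketch; the real method works on Chinese text and a real target model:

```python
# Sketch of WordChange-style keyword search: delete each word in turn
# and query whether the model's *label* flips (a hard-label, black-box
# signal). predict() is a toy stand-in for the target model.
SPAM_WORDS = {"free", "winner", "prize"}

def predict(words):
    return "spam" if sum(w in SPAM_WORDS for w in words) >= 1 else "ham"

def find_keywords(words):
    label = predict(words)
    return [w for i, w in enumerate(words)
            if predict(words[:i] + words[i + 1:]) != label]

msg = "you are a winner claim now".split()
keywords = find_keywords(msg)   # words whose removal flips the label
```

The "insert" and "swap" strategies would then be applied only to the words returned here, keeping the perturbation small.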
<p>Jin et&#x20;al. [<xref ref-type="bibr" rid="B139">139</xref>] proposed a text adversarial example generation method named TextFooler, which crafts adversarial examples by finding the words that have the greatest impact on the output of the target model and replacing them with words of similar meaning. Although the replacement words in the adversarial examples generated by TextFooler are similar to the original words, they may not fit the overall sentence semantics. To make text adversarial examples more natural and free of grammatical errors, and similar to [<xref ref-type="bibr" rid="B138">138</xref>], Garg et&#x20;al. [<xref ref-type="bibr" rid="B143">143</xref>] proposed a text adversarial example generation algorithm named BAE. First, they calculate the importance of each word in the text; then, according to this importance, they choose a word and either replace it with a MASK token or insert a MASK adjacent to it. Finally, they use the pre-trained language model BERT-MLM [<xref ref-type="bibr" rid="B183">183</xref>] to replace the mask with a word that fits the context. Similar to BAE [<xref ref-type="bibr" rid="B143">143</xref>], Li et&#x20;al. [<xref ref-type="bibr" rid="B142">142</xref>] also introduced a pre-trained-language-model-based text adversarial example generation algorithm named CLARE (ContextuaLized AdversaRial Example). Compared with BAE [<xref ref-type="bibr" rid="B143">143</xref>], CLARE has richer attack strategies and can generate text adversarial examples of varied lengths. Experiments showed that the text adversarial examples generated by BAE [<xref ref-type="bibr" rid="B143">143</xref>] and CLARE [<xref ref-type="bibr" rid="B142">142</xref>] were more fluent, natural, and grammatical.</p>
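The importance-then-substitute loop shared by TextFooler and BAE can be sketched as below. The scorer, synonym table, and success threshold are invented for illustration; the real attacks use counter-fitted word embeddings or BERT-MLM to propose candidates:

```python
# Greedy word-substitution sketch in the spirit of TextFooler / BAE:
# rank words by leave-one-out importance, then try hand-written
# synonyms until the toy score drops below a threshold.
SYNONYMS = {"awful": ["mediocre"], "hated": ["disliked"]}
NEG = {"awful", "hated"}

def F(words):                      # toy negative-sentiment score
    return sum(w in NEG for w in words) / max(len(words), 1)

def attack(words, threshold=0.2):
    words = list(words)
    # importance: score drop when the word is removed
    order = sorted(range(len(words)),
                   key=lambda i: F(words) - F(words[:i] + words[i + 1:]),
                   reverse=True)
    for i in order:
        for cand in SYNONYMS.get(words[i], []):
            trial = words[:i] + [cand] + words[i + 1:]
            if F(trial) < threshold:     # attack succeeded
                return trial
            words = trial                # keep the substitution, continue
    return words

adv = attack("i hated this awful film".split())
```

Words are attacked in decreasing order of importance, so the fewest possible substitutions are made before the score crosses the threshold.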
<p>To attack text neural networks in the hard-label black-box setting, where the attacker can only observe the label output by the target model, Maheshwary et&#x20;al. [<xref ref-type="bibr" rid="B144">144</xref>] utilized a Genetic Algorithm (GA) to craft text adversarial examples that share similar semantics with the original text. Experimental results show that, on sentiment analysis tasks, their method can generate text adversarial examples with a higher success rate and smaller perturbations than algorithms such as TextFooler [<xref ref-type="bibr" rid="B139">139</xref>], PSO (Particle Swarm Optimization) [<xref ref-type="bibr" rid="B145">145</xref>], and AEG [<xref ref-type="bibr" rid="B134">134</xref>].</p>
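A minimal genetic loop in this hard-label setting looks as follows. The classifier, synonym lists, and fitness definition are toy assumptions, not the authors' implementation; only the label of predict() is ever observed, matching the hard-label threat model:

```python
import random

# Toy sketch of a hard-label genetic attack: evolve word substitutions
# that flip predict()'s label while changing as few words as possible.
random.seed(0)
SYNONYMS = {"great": ["decent", "fine"], "loved": ["liked"]}
POS = {"great", "loved"}

def predict(words):                       # stand-in target model
    return "pos" if sum(w in POS for w in words) >= 2 else "neg"

def mutate(words):
    out = list(words)
    i = random.randrange(len(out))
    if out[i] in SYNONYMS:
        out[i] = random.choice(SYNONYMS[out[i]])
    return out

def fitness(orig, cand):
    # prefer adversarial candidates that keep more original words
    return sum(a == b for a, b in zip(orig, cand))

def ga_attack(orig, pop_size=8, generations=20):
    target = predict(orig)
    pop = [mutate(orig) for _ in range(pop_size)]
    for _ in range(generations):
        flipped = [c for c in pop if predict(c) != target]
        if flipped:                       # return most similar success
            return max(flipped, key=lambda c: fitness(orig, c))
        pop = [mutate(random.choice(pop)) for _ in range(pop_size)]
    return None

orig = "i loved this great movie".split()
adv = ga_attack(orig)
```

Fitness rewards candidates that stay close to the original text, which is how the GA keeps perturbations small without any gradient access.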
<p>In addition, rather than generating examples by replacing words or characters in the text, Ren et&#x20;al. [<xref ref-type="bibr" rid="B136">136</xref>] introduced a white-box model that generates text adversarial examples at large scale without requiring the original text as input; their model is composed of a vanilla-VAE-based generator <italic>G</italic> and a series of discriminators. The generator produces text adversarial examples, and the discriminators make the adversarial examples of different labels crafted by <italic>G</italic> look more realistic. Their experiments showed that the proposed model could deceive the target neural network with high confidence.</p>
</sec>
<sec id="s4-1-2">
<title>4.1.2 Defense Against Adversarial Attacks</title>
<p>The current defense strategies against text adversarial examples fall into two categories: adversarial example processing and model robustness enhancement. Adversarial example processing identifies adversarial examples by detecting the misspellings and unknown words contained in the text, and converts them into clean text by replacing part of the vocabulary; model robustness enhancement strengthens the model&#x2019;s resistance to adversarial examples through methods such as adversarial training and formal verification.</p>
<sec id="s4-1-2-1">
<title>4.1.2.1 Adversarial Example Processing</title>
<p>Adversarial example detection is an important way to identify adversarial examples in sentiment analysis and spam detection. Pruthi et&#x20;al. [<xref ref-type="bibr" rid="B160">160</xref>] proposed RNN-based word recognizers that detect adversarial examples via misspellings in sentences, but this approach struggles against word-level attacks. By calculating the influence of words in texts, Wang et&#x20;al. [<xref ref-type="bibr" rid="B170">170</xref>] proposed a general text adversarial example detection algorithm named TextFirewall. They used it to defend against adversarial attacks from DeepWordBug [<xref ref-type="bibr" rid="B133">133</xref>], the Genetic attack [<xref ref-type="bibr" rid="B140">140</xref>], and PWWS (Probability Weighted Word Saliency) [<xref ref-type="bibr" rid="B164">164</xref>]; the average attack success rates on Yelp and IMDB decreased by 0.73 and 0.63%, respectively. Mozes et&#x20;al. [<xref ref-type="bibr" rid="B168">168</xref>] also proposed an adversarial example detection method named FGWS (Frequency-Guided Word Substitutions), which exploits the frequency properties of adversarial words and achieved a higher F1 score than DISP [<xref ref-type="bibr" rid="B162">162</xref>] on the SST-2 and IMDB datasets. Besides, Wang et&#x20;al. [<xref ref-type="bibr" rid="B165">165</xref>] proposed a framework named MUDE (Multi-Level Dependencies) that detects adversarial words by exploiting both character- and word-level dependencies.</p>
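The frequency heuristic behind FGWS can be sketched briefly. The frequency table, synonym sets, and thresholds below are invented stand-ins; the real method estimates frequencies from the training corpus and synonyms from WordNet and embedding spaces:

```python
# Sketch of frequency-guided word substitution (FGWS-style) detection:
# low-frequency words are replaced by their most frequent synonym; a
# large shift in the model's score suggests an adversarial input.
FREQ = {"bad": 900, "terrible": 400, "terrrible": 1}   # toy corpus counts
SYNONYMS = {"terrrible": ["terrible", "bad"]}
NEG = {"bad", "terrible"}

def F(words):                       # toy negative-class probability
    return sum(w in NEG for w in words) / max(len(words), 1)

def fgws_detect(words, freq_min=10, delta=0.1):
    restored = [max(SYNONYMS.get(w, [w]), key=lambda s: FREQ.get(s, 0))
                if FREQ.get(w, 0) < freq_min else w
                for w in words]
    return (F(restored) - F(words)) > delta, restored

is_adv, restored = fgws_detect("a terrrible film".split())
```

The intuition: attacks tend to substitute rare misspellings or unusual synonyms, so undoing rare words and re-scoring exposes the manipulation.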
<p>Zhou et&#x20;al. [<xref ref-type="bibr" rid="B162">162</xref>] introduced a framework named DISP (Discriminate Perturbations) to transform text adversarial examples back into clean text. They first identify the perturbed tokens in the input text with a perturbation discriminator, then replace each perturbed token using an embedding estimator, and finally recover these tokens into clean text with a KNN (k-nearest neighbors) algorithm. The experiments indicated that DISP outperformed Adversarial Data Augmentation (ADA), Adversarial Training (AT), and Spelling Correction (SC) in terms of efficiency and semantic integrity.</p>
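The detect-then-restore pipeline of DISP can be caricatured with toy components. The vocabulary membership test stands in for the learned perturbation discriminator, and character overlap stands in for the embedding estimator plus kNN lookup:

```python
# Toy DISP-like recovery pipeline: flag perturbed tokens, then restore
# each to its nearest neighbour in a small vocabulary. The real system
# uses a learned discriminator and BERT embeddings.
VOCAB = ["good", "film", "bad", "movie"]

def is_perturbed(tok):
    return tok not in VOCAB            # stand-in discriminator

def recover(tok):
    # kNN step: nearest vocabulary word by character-set overlap
    return max(VOCAB, key=lambda v: len(set(v) & set(tok)))

tokens = "goood film".split()
clean = [recover(t) if is_perturbed(t) else t for t in tokens]
```

Only flagged tokens are touched, so clean input passes through unchanged.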
</sec>
<sec id="s4-1-2-2">
<title>4.1.2.2 Model Robustness Enhancement</title>
<p>As mentioned above, algorithms to enhance the robustness of NLP models mainly include adversarial training and formal verification. In terms of adversarial training, Si et&#x20;al. [<xref ref-type="bibr" rid="B163">163</xref>] introduced a method named AMDA (Adversarial and Mixup Data Augmentation) that covers a larger proportion of the attack space during adversarial training by crafting a large number of augmented adversarial training examples and feeding them to the machine learning model. They used AMDA to defend against attacks from PWWS [<xref ref-type="bibr" rid="B164">164</xref>] and TextFooler [<xref ref-type="bibr" rid="B139">139</xref>] on the SST-2, AG News, and IMDB datasets, and achieved significant robustness gains in both Targeted Attack Evaluation (TAE) and Static Attack Evaluation (SAE). For the large pre-trained model BERT, Karimi et&#x20;al. [<xref ref-type="bibr" rid="B171">171</xref>] introduced a method named BAT that fine-tunes the BERT model on normal and adversarial text simultaneously to obtain better robustness and generalization. The experiments indicated that the BERT model trained with BAT was more robust than the traditionally trained BERT model on the aspect-based sentiment analysis&#x20;task.</p>
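The mixup step at the heart of AMDA interpolates a clean/adversarial pair in feature space. The vectors, one-hot labels, and Beta parameter below are toy assumptions for illustration:

```python
import random

# Minimal sketch of mixup-style augmentation as used in AMDA: training
# pairs (clean + adversarial) are linearly interpolated in feature
# space so training covers points between them.
random.seed(1)

def mixup(x1, y1, x2, y2, alpha=0.5):
    lam = random.betavariate(alpha, alpha)        # mixing coefficient
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

clean = ([1.0, 0.0], [1.0, 0.0])          # feature vector, one-hot label
adversarial = ([0.0, 1.0], [0.0, 1.0])    # its adversarial counterpart
x_mix, y_mix = mixup(*clean, *adversarial)
```

Because labels are mixed with the same coefficient as features, the augmented point remains consistently labeled, filling in the attack space between clean and adversarial examples.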
<p>In terms of formal verification, Jia et&#x20;al. [<xref ref-type="bibr" rid="B161">161</xref>] proposed certified robustness training, which uses interval bound propagation to minimize an upper bound on the worst-case loss. This method has proved effective against word substitution attacks such as the Genetic attack [<xref ref-type="bibr" rid="B140">140</xref>]. Shi et&#x20;al. [<xref ref-type="bibr" rid="B166">166</xref>] proposed a robustness verification method for Transformer networks; compared with the interval bound propagation algorithm, their method achieves much tighter certified robustness bounds. Ye et&#x20;al. [<xref ref-type="bibr" rid="B167">167</xref>] proposed a structure-free certified robustness framework named SAFER, which only needs to query the output of the target model during verification and can therefore be applied to neural networks of any structure, but it is only suitable for word substitution attacks. Zeng et&#x20;al. [<xref ref-type="bibr" rid="B169">169</xref>] proposed a smoothing-based certified defense method named RanMASK, which can defend against both character- and word-substitution-based attacks.</p>
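The randomized-masking smoothing behind RanMASK can be illustrated with a toy base classifier (the lexicon, mask rate, and sample count are assumptions for the sketch, not the paper's settings):

```python
import random
from collections import Counter

# Sketch of randomized-masking smoothing (RanMASK-style): the smoothed
# classifier majority-votes over predictions on randomly masked copies,
# so a perturbation touching only a few words rarely changes the vote.
random.seed(2)
NEG = {"awful", "hated"}

def base_predict(words):                  # stand-in base classifier
    return "neg" if sum(w in NEG for w in words) >= 1 else "pos"

def smoothed_predict(words, mask_rate=0.3, n_samples=200):
    votes = Counter()
    for _ in range(n_samples):
        masked = [w if random.random() > mask_rate else "[MASK]"
                  for w in words]
        votes[base_predict(masked)] += 1
    return votes.most_common(1)[0][0]

label = smoothed_predict("what an awful hated mess".split())
```

Because each sampled copy hides a random subset of words, an attacker who may change only a few positions cannot reliably sway the majority vote, which is what the certification argument formalizes.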
</sec>
</sec>
</sec>
<sec id="s4-2">
<title>4.2 Security in Social Recommendation System</title>
<sec id="s4-2-1">
<title>4.2.1 Adversarial Attacks</title>
<p>Poisoning attacks influence the recommendation list of the target recommendation system by feeding fake users into it, and they occupy the dominant position among adversarial attacks against machine-learning-based recommendation systems.</p>
<p>Yang [<xref ref-type="bibr" rid="B146">146</xref>] performed promotion and demotion poisoning attacks by formulating them as constrained linear optimization problems; they verified their method on real social network recommendation systems, such as YouTube, eBay, Amazon, and Yelp, and achieved a high attack success rate. Similar to Yang [<xref ref-type="bibr" rid="B146">146</xref>], Fang et&#x20;al. [<xref ref-type="bibr" rid="B147">147</xref>] also formulated poisoning attacks on graph-based recommendation systems as optimization problems and performed the attacks by solving them. Christakopoulou et&#x20;al. [<xref ref-type="bibr" rid="B148">148</xref>] proposed a two-step white-box poisoning attack framework to fool machine-learning-based recommendation systems: first, they use a GAN to generate fake users, and then craft the profiles of the fake users with a projected gradient method. Fang et&#x20;al. [<xref ref-type="bibr" rid="B152">152</xref>] attacked matrix-factorization-based social recommendation systems by optimizing the ratings of a fake user against a subset of influential users. Huang et&#x20;al. [<xref ref-type="bibr" rid="B158">158</xref>] also tried to poison deep-learning-based recommendation systems by maximizing the hit rate of a certain item in the top-n recommendation list predicted by the target system.</p>
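The basic mechanics of a promotion poisoning attack can be shown against a deliberately simple average-rating recommender. The ratings, recommender, and fake-profile shape are all toy assumptions; the surveyed attacks target learned models and optimize the injected profiles rather than hand-writing them:

```python
# Toy promotion poisoning attack: fake users who rate the target item
# highly (and rivals poorly) are injected into an average-rating
# recommender, pushing the target item to the top of the list.
ratings = {                      # user -> {item: rating}
    "u1": {"A": 5, "B": 2},
    "u2": {"A": 4, "B": 1},
    "u3": {"B": 3},
}

def top_item(ratings):
    avg = {}
    for user_ratings in ratings.values():
        for item, r in user_ratings.items():
            avg.setdefault(item, []).append(r)
    return max(avg, key=lambda i: sum(avg[i]) / len(avg[i]))

before = top_item(ratings)                      # genuine favourite
for k in range(5):                              # inject 5 fake profiles
    ratings[f"fake{k}"] = {"B": 5, "A": 1}      # promote B, demote A
after = top_item(ratings)                       # target item promoted
```

Even this crude recommender flips its top item after a handful of injections; the optimization-based attacks above achieve the same effect with fewer, stealthier profiles.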
<p>To generate fake user profiles with strong attack power for poisoning attacks, Wu et&#x20;al. [<xref ref-type="bibr" rid="B159">159</xref>] introduced a flexible poisoning framework named TrialAttack. TrialAttack is GAN-based and consists of three parts: a generator <italic>G</italic>, an influence module <italic>I</italic>, and a discriminator <italic>D</italic>. The generator produces fake user profiles that are close to real users and carry attack influence, the influence module guides the generator toward fake profiles with greater influence, and the discriminator distinguishes the generated fake profiles from real&#x20;ones.</p>
<p>The above attack methods are all white-box algorithms, that is, the attacker needs full knowledge of the parameters of the target model, which is unrealistic for recommendation systems in real social networks. In terms of black-box attacks, Fan et&#x20;al. [<xref ref-type="bibr" rid="B155">155</xref>] introduced a framework named CopyAttack to perform black-box adversarial attacks on recommendation systems in social networks; they used reinforcement learning to select users from the source domain and inject them into the target domain to raise the hit rate of the target item in the top-n recommendation&#x20;list.</p>
<p>Song et&#x20;al. [<xref ref-type="bibr" rid="B150">150</xref>] proposed an adaptive data poisoning framework named PoisonRec, which leverages reinforcement learning to inject fake user data into the recommendation system and can automatically learn effective attack strategies for various recommendation systems with very limited knowledge.</p>
<p>To attack graph embedding models with limited knowledge, Chang et&#x20;al. [<xref ref-type="bibr" rid="B151">151</xref>] introduced an adversarial attack framework named GF-Attack, which formulates the graph neural network as general graph signal processing with corresponding graph filters and then attacks the filters through the feature and adjacency matrices. To minimize the modification of the original graph data, Finkelshtein et&#x20;al. [<xref ref-type="bibr" rid="B157">157</xref>] introduced a single-node attack on graph neural networks, which can fool the target model by modifying only a single arbitrary node in the&#x20;graph.</p>
</sec>
<sec id="s4-2-2">
<title>4.2.2 Defense Against Adversarial Attacks</title>
<p>Current defense algorithms for recommendation systems fall into two categories: model robustness enhancement and abnormal data detection. Model robustness enhancement is based on adversarial training, while abnormal data detection improves the robustness of recommendation systems by recognizing polluted&#x20;data.</p>
<p>In terms of adversarial training, Tang et&#x20;al. [<xref ref-type="bibr" rid="B176">176</xref>] proposed an adversarial training method named AMR (Adversarial Multimedia Recommendation) to defend against adversarial attacks. The training process can be interpreted as a minimax game: one player continuously generates perturbations that maximize the loss of the target model, while the other continuously optimizes the model parameters to withstand these perturbations.</p>
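The minimax game above can be sketched on a one-layer model with a gradient-sign inner step. The squared-error model, step sizes, and data are toy assumptions standing in for AMR's embedding-level perturbations:

```python
# Sketch of embedding-level adversarial training as a minimax game:
# the inner step builds a worst-case perturbation from the gradient
# sign (FGSM-style), the outer step updates the model on it.
def grad_x(w, x, y):
    # d/dx of squared error (w.x - y)^2  ->  2*(w.x - y)*w
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2 * err * wi for wi in w]

def grad_w(w, x, y):
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2 * err * xi for xi in x]

def sign(v):
    return [1.0 if a > 0 else -1.0 if a < 0 else 0.0 for a in v]

def adv_train(w, data, eps=0.1, lr=0.05, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            # inner max: perturb the input within an eps-ball
            x_adv = [xi + eps * s
                     for xi, s in zip(x, sign(grad_x(w, x, y)))]
            # outer min: update the model on the perturbed input
            w = [wi - lr * g for wi, g in zip(w, grad_w(w, x_adv, y))]
    return w

data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
w = adv_train([0.0, 0.0], data)
```

Training on the worst-case point inside the eps-ball rather than the clean point is what makes the learned parameters insensitive to small input perturbations.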
<p>By combining knowledge distillation with adversarial training, Du et&#x20;al. [<xref ref-type="bibr" rid="B174">174</xref>] produced a more robust collaborative filtering model based on neural network to defend against adversarial attacks. The experiments indicated that their model can effectively enhance the robustness of the recommendation system under the attack of the C&#x26;W [<xref ref-type="bibr" rid="B88">88</xref>] algorithm.</p>
<p>Manotumruksa et&#x20;al. [<xref ref-type="bibr" rid="B177">177</xref>] also proposed a robustness enhancement framework named SAO (Sequential-based Adversarial Optimization), which strengthens the recommendation system by generating a sequence of adversarial perturbations and adding them to the training set during training.</p>
<p>Li et&#x20;al. [<xref ref-type="bibr" rid="B178">178</xref>] introduced a framework named SACRA (Self-Attentive prospective Customer Recommendation Framework) for prospective customer recommendation. Similar to Manotumruksa et&#x20;al. [<xref ref-type="bibr" rid="B177">177</xref>], SACRA enhances its robustness by dynamically adding adversarial perturbations to the training set, making the recommendation system immune to these perturbations.</p>
<p>Wu et&#x20;al. [<xref ref-type="bibr" rid="B181">181</xref>] used the influence function proposed by Koh et&#x20;al. [<xref ref-type="bibr" rid="B184">184</xref>] to craft fake users and injected them into the training set to enhance the robustness of the recommendation system, a method they named adversarial poisoning training (APT). They evaluated APT against five poisoning attack algorithms, and the experiments indicated that APT can enhance the robustness of the recommendation system effectively.</p>
<p>By combining the advantages of adversarial training and the VAE (Variational Auto-Encoder), Yi et&#x20;al. [<xref ref-type="bibr" rid="B182">182</xref>] proposed a robust recommendation model named DAVE (Dual Adversarial Variational Embedding), which is composed of a User Adversarial Embedding (UserAVE), an Item Adversarial Embedding (ItemAVE), and a neural collaborative filtering network. UserAVE and ItemAVE generate user and item embeddings from the user and item interaction vectors, respectively; these embeddings are then fed into the collaborative filtering network to predict the recommendation results. During training, DAVE reduces the impact of adversarial perturbations by adaptively generating a unique embedding distribution for each user and&#x20;item.</p>
<p>In terms of abnormal data detection, Shahrasbi et&#x20;al. [<xref ref-type="bibr" rid="B180">180</xref>] proposed a GAN-based polluted data detection method. First, they convert clean user session data into embedding sequences with a Doc2Vec language model. Then, during GAN training, the generator learns the distribution of real embedding sequences, while the discriminator learns to distinguish real sequences from those produced by the generator. Once training is complete, polluted data can be identified within the whole dataset.</p>
</sec>
</sec>
<sec id="s4-3">
<title>4.3 Security in Other Aspects of Social Networks</title>
<p>In this subsection, we mainly review research on adversarial examples concerning question-and-answer robots and neural machine translation.</p>
<sec id="s4-3-1">
<title>4.3.1 Question and Answer Robot</title>
<p>Xue et&#x20;al. [<xref ref-type="bibr" rid="B185">185</xref>] introduced a text adversarial example generation method named DPAGE (Dependency Parse-based Adversarial Examples Generation) to perform black-box adversarial attacks on Q&#x26;A robots. They extract the keywords of a sentence based on its dependency relations and then replace these keywords with similar adversarial words to craft adversarial questions. They evaluated the performance of DPAGE on two Q&#x26;A robots, DrQA and Google Assistant, and the results showed that the adversarial examples crafted by DPAGE could affect both the correct answer and the top-k candidate answers output by the Q&#x26;A robots. Similar to [<xref ref-type="bibr" rid="B185">185</xref>], Deng et&#x20;al. [<xref ref-type="bibr" rid="B186">186</xref>] proposed a method named APE (Attention weight Probability Estimation) to extract keywords from the dialogue and fool the target Q&#x26;A system by replacing these keywords with synonyms. The experimental results show that their method can attack the Q&#x26;A system with a high success&#x20;rate.</p>
</sec>
<sec id="s4-3-2">
<title>4.3.2 Neural Machine Translation</title>
<p>NMT models are also vulnerable to adversarial examples. Ebrahimi et&#x20;al. [<xref ref-type="bibr" rid="B187">187</xref>] proposed a white-box, gradient-based optimization method for generating text adversarial examples to perform targeted adversarial attacks on NMT models. Their experiments showed that the method can attack the target NMT model with a high success rate, and that the robustness of the model improves significantly after robust training. Besides, in the study of poisoning attacks, Wang et&#x20;al. [<xref ref-type="bibr" rid="B188">188</xref>] successfully implemented a poisoning attack by injecting only 0.02% of the total data into the data&#x20;set.</p>
<p>To enhance the robustness of NMT models, Cheng et&#x20;al. [<xref ref-type="bibr" rid="B189">189</xref>] crafted text adversarial examples with a white-box gradient-based method and used them to strengthen the model. Experiments on English-German and Chinese-English translation tasks showed that their method can significantly improve the robustness and performance of the NMT model. In another study, Cheng et&#x20;al. [<xref ref-type="bibr" rid="B190">190</xref>] also tried to enhance the robustness of NMT models by augmenting the training data with an adversarial augmentation technique.</p>
</sec>
</sec>
</sec>
<sec id="s5">
<title>5 Discussion and Conclusion</title>
<sec id="s5-1">
<title>5.1 Discussion</title>
<p>Although adversarial example generation and defense algorithms have made great progress on unstructured data in social networks, many key issues remain unresolved.</p>
<sec id="s5-1-1">
<title>5.1.1 Constraints for Attacks on Real Systems</title>
<p>Many adversarial example generation algorithms in social networks do not consider the restrictions that apply when attacking real systems. In terms of text adversarial example generation, many studies [<xref ref-type="bibr" rid="B133">133</xref>, <xref ref-type="bibr" rid="B138">138</xref>, <xref ref-type="bibr" rid="B139">139</xref>, <xref ref-type="bibr" rid="B141">141</xref>] obtain the keywords in a sentence by frequently querying the target model; however, frequent queries are easily detected and defended against when the attack is performed on a real system. In terms of adversarial example generation for recommendation systems, the attacker poisons the system by modifying the edge and attribute information of some nodes in the social network graph [<xref ref-type="bibr" rid="B146">146</xref>, <xref ref-type="bibr" rid="B147">147</xref>, <xref ref-type="bibr" rid="B150">150</xref>, <xref ref-type="bibr" rid="B151">151</xref>]; however, in a real social network, the node the attacker chooses to modify may not be under the attacker&#x2019;s control. Therefore, subsequent research on adversarial example generation in social networks should give more consideration to the limitations of real scenarios.</p>
</sec>
<sec id="s5-1-2">
<title>5.1.2 The Security of Social Network Data</title>
<p>To adapt to the complex and changeable user structure of social networks and the rapid change and short lifespan of cyber language, AI models applied to social networks need to fine-tune their parameters frequently on real social network data. Poisoning attacks must therefore be effectively avoided during online training. Since adversarial examples are usually hard to spot visually, this is difficult: on the one hand, there is currently little research on adversarial example detection for unstructured data such as graphs; on the other hand, as attack methods continuously evolve, existing data augmentation and cleaning techniques cannot reliably detect all malicious data. Therefore, how to accurately detect poisoned data in social networks will become a focus of future research.</p>
</sec>
<sec id="s5-1-3">
<title>5.1.3 Robustness Evaluation of Social Network Models</title>
<p>Due to the poor interpretability of machine learning algorithms, it is difficult to analyze and prove their robustness mathematically. Consequently, the current robustness evaluation of machine learning algorithms mainly depends on their defensive ability against specific adversarial attacks; however, the robustness conclusions obtained this way rarely transfer to the latest attack algorithms. In the field of computer vision, some studies [<xref ref-type="bibr" rid="B130">130</xref>, <xref ref-type="bibr" rid="B131">131</xref>] have tried to use formal verification to analyze the robustness of machine learning algorithms. In social networks, some studies [<xref ref-type="bibr" rid="B161">161</xref>, <xref ref-type="bibr" rid="B166">166</xref>, <xref ref-type="bibr" rid="B167">167</xref>, <xref ref-type="bibr" rid="B169">169</xref>] also try to use formal verification to analyze the robustness of text classification models, but the results are greatly affected by model structure and data types; for recommendation systems, research on robustness analysis of machine learning algorithms is still blank. Therefore, robustness analysis of machine learning algorithms will also be a research focus in social networks.</p>
</sec>
</sec>
<sec id="s5-2">
<title>5.2 Conclusion</title>
<p>Although machine learning algorithms have made significant advances in many fields, it cannot be ignored that they are vulnerable to attacks from adversarial examples. That is, adding perturbations imperceptible to the human eye to the original data can cause a machine learning algorithm to produce a completely different output with high probability. In this paper, we review the application of machine learning algorithms in social networks from the aspects of sentiment analysis, recommendation systems, and spam detection, as well as the research progress of adversarial example generation and defense algorithms in social networks.</p>
<p>Although the data processed by machine learning models in social networks is usually unstructured, such as text or graphs, the adversarial example generation algorithms developed for images in computer vision also apply to such data after extension. Therefore, how to use machine learning algorithms to implement various functions in social networks while ensuring the robustness of the algorithms themselves is one of the key research topics. Besides, to improve the robustness of machine learning algorithms in social networks, research on adversarial example generation should focus more on algorithms applicable to real online social network machine learning models, so as to strengthen online models. In terms of adversarial example defense, while strengthening robustness against specific attacks, more research on active defense algorithms such as certified training should also be carried out to defend against adversarial attacks.</p>
</sec>
</sec>
</body>
<back>
<sec id="s6">
<title>Author Contributions</title>
<p>SG: Investigation, Analysis of papers, Drafting the manuscript, Review; XL: Review, Editing; ZM: Review, Editing.</p>
</sec>
<sec id="s7">
<title>Funding</title>
<p>This work was supported by National Key R&#x26;D Program of China (Grant No. 2020AAA0107700), Shenzhen Fundamental Research Program (Grant No. 20210317191843003), the Shaanxi Provincial Key R&#x26;D Program (Grant No. 2021ZDLGY05-01), the Natural Science Basic Research Plan in Shaanxi Province of China (Grant No. 2020JQ-214).</p>
</sec>
<sec sec-type="COI-statement" id="s8">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Clark</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Algoe</surname>
<given-names>SB</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>MC</given-names>
</name>
</person-group>. <article-title>Social Network Sites and Well-Being: the Role of Social Connection</article-title>. <source>Curr Dir Psychol Sci</source> (<year>2018</year>) <volume>27</volume>(<issue>1</issue>):<fpage>32</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1177/0963721417730833</pub-id> </citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Zhan</surname>
<given-names>X-X</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>G-Q</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z-K</given-names>
</name>
</person-group>. <article-title>Markov-based Solution for Information Diffusion on Adaptive Social Networks</article-title>. <source>Appl Maths Comput</source> (<year>2020</year>) <volume>380</volume>:<fpage>125286</fpage>. <pub-id pub-id-type="doi">10.1016/j.amc.2020.125286</pub-id> </citation>
</ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H-T</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>Analysis of Transmission Dynamics for Zika Virus on Networks</article-title>. <source>Appl Maths Comput</source> (<year>2019</year>) <volume>347</volume>:<fpage>566</fpage>&#x2013;<lpage>77</lpage>. <pub-id pub-id-type="doi">10.1016/j.amc.2018.11.042</pub-id> </citation>
</ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhan</surname>
<given-names>X-X</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z-K</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>G-Q</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>JJH</given-names>
</name>
<etal/>
</person-group> <article-title>Coupling Dynamics of Epidemic Spreading and Information Diffusion on Complex Networks</article-title>. <source>Appl Maths Comput</source> (<year>2018</year>) <volume>332</volume>:<fpage>437</fpage>&#x2013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.1016/j.amc.2018.03.050</pub-id> </citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Han</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Yao</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>H-C</given-names>
</name>
<name>
<surname>Deb</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>J-L</given-names>
</name>
<etal/>
</person-group> <article-title>Adversarial Attacks and Defenses in Images, Graphs and Text: A Review</article-title>. <source>Int J&#x20;Automation Comput</source> (<year>2020</year>) <volume>17</volume>(<issue>2</issue>):<fpage>151</fpage>&#x2013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1007/s11633-019-1211-x</pub-id> </citation>
</ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Duan</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Mu</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>A Black-Box Attack Method against Machine-Learning-Based Anomaly Network Flow Detection Models</article-title>. <source>Security Commun Networks</source> (<year>2021</year>) <volume>2021</volume>:<fpage>1</fpage>&#x2013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1155/2021/5578335</pub-id> </citation>
</ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>WE</given-names>
</name>
<name>
<surname>Sheng</surname>
<given-names>QZ</given-names>
</name>
<name>
<surname>Alhazmi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>C</given-names>
</name>
</person-group>. <source>Generating Textual Adversarial Examples for Deep Learning Models: A Survey</source> (<year>2019</year>). <comment>arXiv preprint arXiv:1901.06796</comment>. </citation>
</ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Qin</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Carlini</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Cottrell</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Goodfellow</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Raffel</surname>
<given-names>C</given-names>
</name>
</person-group>. <article-title>Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition</article-title>. In: <conf-name>36th International Conference on Machine Learning, ICML 2019</conf-name>. <publisher-loc>Long Beach, CA, United States</publisher-loc>: <publisher-name>PMLR</publisher-name> (<year>2019</year>). p. <fpage>5231</fpage>&#x2013;<lpage>40</lpage>. </citation>
</ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Han</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>A Weighted Network Community Detection Algorithm Based on Deep Learning</article-title>. <source>Appl Maths Comput</source> (<year>2021</year>) <volume>401</volume>:<fpage>126012</fpage>. <pub-id pub-id-type="doi">10.1016/j.amc.2021.126012</pub-id> </citation>
</ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Tian</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>Functional Immunization of Networks Based on Message Passing</article-title>. <source>Appl Maths Comput</source> (<year>2020</year>) <volume>366</volume>:<fpage>124728</fpage>. <pub-id pub-id-type="doi">10.1016/j.amc.2019.124728</pub-id> </citation>
</ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Han</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Tian</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Jia</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Topic Representation Model Based on Microblogging Behavior Analysis</article-title>. <source>World Wide Web</source> (<year>2020</year>) <volume>23</volume>(<issue>6</issue>):<fpage>3083</fpage>&#x2013;<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1007/s11280-020-00822-x</pub-id> </citation>
</ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nie</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Jia</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>B</given-names>
</name>
</person-group>. <article-title>Identifying Users across Social Networks Based on Dynamic Core Interests</article-title>. <source>Neurocomputing</source> (<year>2016</year>) <volume>210</volume>:<fpage>107</fpage>&#x2013;<lpage>15</lpage>. </citation>
</ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Al-Smadi</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Qawasmeh</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Al-Ayyoub</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Jararweh</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Gupta</surname>
<given-names>B</given-names>
</name>
</person-group>. <article-title>Deep Recurrent Neural Network vs. Support Vector Machine for Aspect-Based Sentiment Analysis of Arabic Hotels&#x27; Reviews</article-title>. <source>J&#x20;Comput Sci</source> (<year>2018</year>) <volume>27</volume>:<fpage>386</fpage>&#x2013;<lpage>93</lpage>. <pub-id pub-id-type="doi">10.1016/j.jocs.2017.11.006</pub-id> </citation>
</ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hitesh</surname>
<given-names>MSR</given-names>
</name>
<name>
<surname>Vaibhav</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Kalki</surname>
<given-names>YJA</given-names>
</name>
<name>
<surname>Kamtam</surname>
<given-names>SH</given-names>
</name>
<name>
<surname>Kumari</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Real-time Sentiment Analysis of 2019 Election Tweets Using Word2vec and Random forest Model</article-title>. In: <conf-name>2019 2nd International Conference on Intelligent Communication and Computational Techniques (ICCT)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2019</year>). p. <fpage>146</fpage>&#x2013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1109/icct46177.2019.8969049</pub-id> </citation>
</ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Long</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Ou</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Sentiment Analysis of Text Based on Bidirectional Lstm with Multi-Head Attention</article-title>. <source>IEEE Access</source> (<year>2019</year>) <volume>7</volume>:<fpage>141960</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1109/access.2019.2942614</pub-id> </citation>
</ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Djaballah</surname>
<given-names>KA</given-names>
</name>
<name>
<surname>Boukhalfa</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Omar</surname>
<given-names>B</given-names>
</name>
</person-group>. <article-title>Sentiment Analysis of Twitter Messages Using Word2vec by Weighted Average</article-title>. In: <conf-name>2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2019</year>). p. <fpage>223</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/snams.2019.8931827</pub-id> </citation>
</ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Mikolov</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Sutskever</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Corrado</surname>
<given-names>GS</given-names>
</name>
<name>
<surname>Dean</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>Distributed Representations of Words and Phrases and Their Compositionality</article-title>. In: <source>Advances in Neural Information Processing Systems</source> (<year>2013</year>), <publisher-loc>Lake Tahoe, NV, United States</publisher-loc>. p. <fpage>3111</fpage>&#x2013;<lpage>9</lpage>. </citation>
</ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ho</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Ondusko</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Roy</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Hsu</surname>
<given-names>DF</given-names>
</name>
</person-group>. <article-title>Sentiment Analysis on Tweets Using Machine Learning and Combinatorial Fusion</article-title>. In: <conf-name>2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2019</year>). p. <fpage>1066</fpage>&#x2013;<lpage>71</lpage>. <pub-id pub-id-type="doi">10.1109/dasc/picom/cbdcom/cyberscitech.2019.00191</pub-id> </citation>
</ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Mi</surname>
<given-names>K</given-names>
</name>
</person-group>. <article-title>Attention-based Sentiment Reasoner for Aspect-Based Sentiment Analysis</article-title>. <source>Human-centric Comput Inf Sci</source> (<year>2019</year>) <volume>9</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1186/s13673-019-0196-3</pub-id> </citation>
</ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yao</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Domain-specific Sentiment Analysis for Tweets during Hurricanes (Dssa-h): A Domain-Adversarial Neural-Network-Based Approach</article-title>. <source>Comput Environ Urban Syst</source> (<year>2020</year>) <volume>83</volume>:<fpage>101522</fpage>. <pub-id pub-id-type="doi">10.1016/j.compenvurbsys.2020.101522</pub-id> </citation>
</ref>
<ref id="B21">
<label>21.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Umer</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ashraf</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Mehmood</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kumari</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ullah</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Sang Choi</surname>
<given-names>G</given-names>
</name>
</person-group>. <article-title>Sentiment Analysis of Tweets Using a Unified Convolutional Neural Network&#x2010;long Short&#x2010;term Memory Network Model</article-title>. <source>Comput Intelligence</source> (<year>2021</year>) <volume>37</volume>(<issue>1</issue>):<fpage>409</fpage>&#x2013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1111/coin.12415</pub-id> </citation>
</ref>
<ref id="B22">
<label>22.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Conneau</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Schwenk</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Barrault</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Lecun</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Very Deep Convolutional Networks for Text Classification</article-title>. In: <conf-name>15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017-Proceedings of Conference</conf-name>. <publisher-loc>Valencia, Spain</publisher-loc> (<year>2016</year>). p. <fpage>1107</fpage>&#x2013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.18653/v1/e17-1104</pub-id> </citation>
</ref>
<ref id="B23">
<label>23.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cliche</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>Bb_twtr at Semeval-2017 Task 4: Twitter Sentiment Analysis with Cnns and Lstms</article-title>. In: <conf-name>Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)</conf-name>. <publisher-loc>Vancouver, Canada</publisher-loc>: <publisher-name>Association for Computational Linguistics</publisher-name> (<year>2017</year>). p. <fpage>573</fpage>&#x2013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.18653/v1/S17-2094</pub-id> </citation>
</ref>
<ref id="B24">
<label>24.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lv</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Cao</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Peng</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Niu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>S</given-names>
</name>
<etal/>
</person-group> <article-title>Aspect-level Sentiment Analysis Using Context and Aspect Memory Network</article-title>. <source>Neurocomputing</source> (<year>2021</year>) <volume>428</volume>:<fpage>195</fpage>&#x2013;<lpage>205</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2020.11.049</pub-id> </citation>
</ref>
<ref id="B25">
<label>25.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rawat</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Mahor</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Chirgaiya</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Shaw</surname>
<given-names>RN</given-names>
</name>
<name>
<surname>Ghosh</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Sentiment Analysis at Online Social Network for Cyber-Malicious post Reviews Using Machine Learning Techniques</article-title>. <source>Computationally Intell Syst their Appl</source> (<year>2021</year>) <volume>950</volume>: <fpage>113</fpage>&#x2013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1007/978-981-16-0407-2_9</pub-id> </citation>
</ref>
<ref id="B26">
<label>26.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>Attention-based Lstm for Aspect-Level Sentiment Classification</article-title>. In: <conf-name>Proceedings of the 2016 conference on empirical methods in natural language processing</conf-name> (<year>2016</year>). p. <fpage>606</fpage>&#x2013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.18653/v1/d16-1058</pub-id> </citation>
</ref>
<ref id="B27">
<label>27.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Pontiki</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Galanis</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Papageorgiou</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Androutsopoulos</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Manandhar</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Al-Smadi</surname>
<given-names>M</given-names>
</name>
<etal/>
</person-group> <article-title>Semeval-2016 Task 5: Aspect Based Sentiment Analysis</article-title>. In: <conf-name>International workshop on semantic evaluation</conf-name> (<year>2016</year>). p. <fpage>19</fpage>&#x2013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.18653/v1/s16-1002</pub-id> </citation>
</ref>
<ref id="B28">
<label>28.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Yao</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>J</given-names>
</name>
<etal/>
</person-group> <article-title>Graph Neural Networks for Social Recommendation</article-title>. In: <conf-name>The World Wide Web Conference</conf-name> (<year>2019</year>). p. <fpage>417</fpage>&#x2013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1145/3308558.3313488</pub-id> </citation>
</ref>
<ref id="B29">
<label>29.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>van den Berg</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Kipf</surname>
<given-names>TN</given-names>
</name>
<name>
<surname>Welling</surname>
<given-names>M</given-names>
</name>
</person-group>. <source>Graph Convolutional Matrix Completion</source> (<year>2017</year>). <comment>arXiv preprint arXiv:1706.02263</comment>. </citation>
</ref>
<ref id="B30">
<label>30.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>Deep Modeling of Social Relations for Recommendation</article-title>. In: <source>Thirty-Second AAAI Conference on Artificial Intelligence</source> (<year>2018</year>). </citation>
</ref>
<ref id="B31">
<label>31.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>He</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Nie</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Chua</surname>
<given-names>T-S</given-names>
</name>
</person-group>. <article-title>Neural Collaborative Filtering</article-title>. In: <conf-name>Proceedings of the 26th international conference on world wide web</conf-name> (<year>2017</year>). p. <fpage>173</fpage>&#x2013;<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1145/3038912.3052569</pub-id> </citation>
</ref>
<ref id="B32">
<label>32.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Gui</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Peng</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>Y</given-names>
</name>
<etal/>
</person-group> <article-title>Mention Recommendation in Twitter with Cooperative Multi-Agent Reinforcement Learning</article-title>. In: <conf-name>Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval</conf-name> (<year>2019</year>). p. <fpage>535</fpage>&#x2013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1145/3331184.3331237</pub-id> </citation>
</ref>
<ref id="B33">
<label>33.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>H</given-names>
</name>
</person-group>. <article-title>A Deep Graph Neural Network-Based Mechanism for Social Recommendations</article-title>. <source>IEEE Trans Ind Inform</source> (<year>2020</year>) <volume>17</volume>(<issue>4</issue>):<fpage>2776</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1109/tii.2020.2986316</pub-id> </citation>
</ref>
<ref id="B34">
<label>34.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Massa</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Avesani</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Controversial Users Demand Local Trust Metrics: An Experimental Study on Epinions.com Community</article-title>. <source>AAAI</source> (<year>2005</year>) <volume>5</volume>:<fpage>121</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.5555/1619332.1619354</pub-id> </citation>
</ref>
<ref id="B35">
<label>35.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hegde</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Satyappanavar</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Setty</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Restaurant Setup Business Analysis Using Yelp Dataset</article-title>. In: <conf-name>2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2017</year>). p. <fpage>2342</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/icacci.2017.8126196</pub-id> </citation>
</ref>
<ref id="B36">
<label>36.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Steck</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>On Top-K Recommendation Using Social Networks</article-title>. In: <conf-name>Proceedings of the sixth ACM conference on Recommender systems</conf-name> (<year>2012</year>). p. <fpage>67</fpage>&#x2013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1145/2365952.2365969</pub-id> </citation>
</ref>
<ref id="B37">
<label>37.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jamali</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ester</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>A Matrix Factorization Technique with Trust Propagation for Recommendation in Social Networks</article-title>. In: <conf-name>Proceedings of the fourth ACM conference on Recommender systems</conf-name> (<year>2010</year>). p. <fpage>135</fpage>&#x2013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1145/1864708.1864736</pub-id> </citation>
</ref>
<ref id="B38">
<label>38.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yorke-Smith</surname>
<given-names>N</given-names>
</name>
</person-group>. <article-title>Trustsvd: Collaborative Filtering with Both the Explicit and Implicit Influence of User Trust and of Item Ratings</article-title>. In: <conf-name>Proceedings of the AAAI Conference on Artificial Intelligence, volume 29</conf-name> (<year>2015</year>). </citation>
</ref>
<ref id="B39">
<label>39.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Lei</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Social Collaborative Filtering by Trust</article-title>. <source>IEEE Trans Pattern Anal Mach Intell</source> (<year>2016</year>) <volume>39</volume>(<issue>8</issue>):<fpage>1633</fpage>&#x2013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2016.2605085</pub-id> </citation>
</ref>
<ref id="B40">
<label>40.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Sedhain</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Menon</surname>
<given-names>AK</given-names>
</name>
<name>
<surname>Scott</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Xie</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>Autorec: Autoencoders Meet Collaborative Filtering</article-title>. In: <conf-name>Proceedings of the 24th international conference on World Wide Web</conf-name> (<year>2015</year>). p. <fpage>111</fpage>&#x2013;<lpage>2</lpage>. </citation>
</ref>
<ref id="B41">
<label>41.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>An Efficient Group Recommendation Model with Multiattention-Based Neural Networks</article-title>. <source>IEEE Trans Neural Netw Learn Syst.</source> (<year>2020</year>) <volume>31</volume>(<issue>11</issue>):<fpage>4461</fpage>&#x2013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1109/tnnls.2019.2955567</pub-id> </citation>
</ref>
<ref id="B42">
<label>42.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Dong</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>Latent Group Recommendation Based on Dynamic Probabilistic Matrix Factorization Model Integrated with CNN</article-title>. <source>J&#x20;Comput Res Dev</source> (<year>2017</year>) <volume>54</volume>(<issue>8</issue>):<fpage>1853</fpage>. <pub-id pub-id-type="doi">10.7544/issn1000-1239.2017.20170344</pub-id> </citation>
</ref>
<ref id="B43">
<label>43.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Tran</surname>
<given-names>LV</given-names>
</name>
<name>
<surname>Pham</surname>
<given-names>T-AN</given-names>
</name>
<name>
<surname>Tay</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X</given-names>
</name>
</person-group>. <article-title>Interact and Decide: Medley of Sub-attention Networks for Effective Group Recommendation</article-title>. In: <conf-name>Proceedings of the 42nd International ACM SIGIR conference on research and development in information retrieval</conf-name> (<year>2019</year>). p. <fpage>255</fpage>&#x2013;<lpage>64</lpage>. </citation>
</ref>
<ref id="B44">
<label>44.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cao</surname>
<given-names>D</given-names>
</name>
<name>
<surname>He</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Miao</surname>
<given-names>L</given-names>
</name>
<name>
<surname>An</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Hong</surname>
<given-names>R</given-names>
</name>
</person-group>. <article-title>Attentive Group Recommendation</article-title>. In: <conf-name>The 41st International ACM SIGIR Conference on Research &#x26; Development in Information Retrieval</conf-name> (<year>2018</year>). p. <fpage>645</fpage>&#x2013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1145/3209978.3209998</pub-id> </citation>
</ref>
<ref id="B45">
<label>45.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pan</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>He</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>H</given-names>
</name>
</person-group>. <article-title>A Correlative Denoising Autoencoder to Model Social Influence for Top-N Recommender System</article-title>. <source>Front Comput Sci</source> (<year>2020</year>) <volume>14</volume>(<issue>3</issue>):<fpage>1</fpage>&#x2013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1007/s11704-019-8123-3</pub-id> </citation>
</ref>
<ref id="B46">
<label>46.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>DuBois</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>AX</given-names>
</name>
<name>
<surname>Ester</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>Collaborative Denoising Auto-Encoders for Top-N Recommender Systems</article-title>. In: <conf-name>Proceedings of the ninth ACM international conference on web search and data mining</conf-name> (<year>2016</year>). p. <fpage>153</fpage>&#x2013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1145/2835776.2835837</pub-id> </citation>
</ref>
<ref id="B47">
<label>47.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>B</given-names>
</name>
</person-group>. <article-title>Trust-aware Collaborative Filtering with a Denoising Autoencoder</article-title>. <source>Neural Process Lett</source> (<year>2019</year>) <volume>49</volume>(<issue>2</issue>):<fpage>835</fpage>&#x2013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1007/s11063-018-9831-7</pub-id> </citation>
</ref>
<ref id="B48">
<label>48.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zheng</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>L</given-names>
</name>
<etal/>
</person-group> <article-title>Implicit Relation-Aware Social Recommendation with Variational Auto-Encoder</article-title>. <source>World Wide Web</source>. <publisher-name>Springer</publisher-name> (<year>2021</year>). p. <fpage>1</fpage>&#x2013;<lpage>16</lpage>. </citation>
</ref>
<ref id="B49">
<label>49.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cantador</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Peter</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Kuflik</surname>
<given-names>T</given-names>
</name>
</person-group>. <article-title>Second Workshop on Information Heterogeneity and Fusion in Recommender Systems (HetRec2011)</article-title>. In: <conf-name>Proceedings of the fifth ACM conference on Recommender systems</conf-name> (<year>2011</year>). p. <fpage>387</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1145/2043932.2044016</pub-id> </citation>
</ref>
<ref id="B50">
<label>50.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yorke-Smith</surname>
<given-names>N</given-names>
</name>
</person-group>. <article-title>A Novel Bayesian Similarity Measure for Recommender Systems</article-title>. In: <conf-name>Twenty-third international joint conference on artificial intelligence</conf-name> (<year>2013</year>). </citation>
</ref>
<ref id="B51">
<label>51.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Thalmann</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Yorke-Smith</surname>
<given-names>N</given-names>
</name>
</person-group>. <article-title>ETAF: An Extended Trust Antecedents Framework for Trust Prediction</article-title>. In: <conf-name>2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2014</year>). p. <fpage>540</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1109/asonam.2014.6921639</pub-id> </citation>
</ref>
<ref id="B52">
<label>52.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Social Attentional Memory Network: Modeling Aspect-And Friend-Level Differences in Recommendation</article-title>. In: <conf-name>Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining</conf-name> (<year>2019</year>). p. <fpage>177</fpage>&#x2013;<lpage>85</lpage>. </citation>
</ref>
<ref id="B53">
<label>53.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Liang</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Krishnan</surname>
<given-names>RG</given-names>
</name>
<name>
<surname>Hoffman</surname>
<given-names>MD</given-names>
</name>
<name>
<surname>Jebara</surname>
<given-names>T</given-names>
</name>
</person-group>. <article-title>Variational Autoencoders for Collaborative Filtering</article-title>. In: <conf-name>Proceedings of the 2018 world wide web conference</conf-name> (<year>2018</year>). p. <fpage>689</fpage>&#x2013;<lpage>98</lpage>. <pub-id pub-id-type="doi">10.1145/3178876.3186150</pub-id> </citation>
</ref>
<ref id="B54">
<label>54.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ni</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>An Effective Recommendation Model Based on Deep Representation Learning</article-title>. <source>Inf Sci</source> (<year>2021</year>) <volume>542</volume>:<fpage>324</fpage>&#x2013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1016/j.ins.2020.07.038</pub-id> </citation>
</ref>
<ref id="B55">
<label>55.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Oh</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>H</given-names>
</name>
</person-group>. <article-title>Convolutional Matrix Factorization for Document Context-Aware Recommendation</article-title>. In: <conf-name>Proceedings of the 10th ACM conference on recommender systems</conf-name> (<year>2016</year>). p. <fpage>233</fpage>&#x2013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1145/2959100.2959165</pub-id> </citation>
</ref>
<ref id="B56">
<label>56.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Ni</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>C</given-names>
</name>
</person-group>. <article-title>Multimodal Representation Learning for Recommendation in Internet of Things</article-title>. <source>IEEE Internet Things J</source> (<year>2019</year>) <volume>6</volume>(<issue>6</issue>):<fpage>10675</fpage>&#x2013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1109/jiot.2019.2940709</pub-id> </citation>
</ref>
<ref id="B57">
<label>57.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>Gated Recurrent Units Based Neural Network for Time Heterogeneous Feedback Recommendation</article-title>. <source>Inf Sci</source> (<year>2018</year>) <volume>423</volume>:<fpage>50</fpage>&#x2013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1016/j.ins.2017.09.048</pub-id> </citation>
</ref>
<ref id="B58">
<label>58.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Xiao</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Ye</surname>
<given-names>H</given-names>
</name>
<name>
<surname>He</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Chua</surname>
<given-names>T-S</given-names>
</name>
</person-group>. <article-title>Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks</article-title>. In: <conf-name>Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, (IJCAI-17)</conf-name>. <publisher-loc>Melbourne, VIC, Australia</publisher-loc> (<year>2017</year>). p. <fpage>3119</fpage>&#x2013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.24963/ijcai.2017/435</pub-id> </citation>
</ref>
<ref id="B59">
<label>59.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zeng</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Shang</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Han</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Zeng</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>RACMF: Robust Attention Convolutional Matrix Factorization for Rating Prediction</article-title>. <source>Pattern Anal Applic</source> (<year>2019</year>) <volume>22</volume>(<issue>4</issue>):<fpage>1655</fpage>&#x2013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1007/s10044-019-00814-2</pub-id> </citation>
</ref>
<ref id="B60">
<label>60.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Khattar</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Varma</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Gupta</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>HRAM: A Hybrid Recurrent Attention Machine for News Recommendation</article-title>. In: <conf-name>Proceedings of the 27th ACM International Conference on Information and Knowledge Management</conf-name> (<year>2018</year>). p. <fpage>1619</fpage>&#x2013;<lpage>22</lpage>. </citation>
</ref>
<ref id="B61">
<label>61.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Gulla</surname>
<given-names>JA</given-names>
</name>
</person-group>. <article-title>Dynamic Attention-Integrated Neural Network for Session-Based News Recommendation</article-title>. <source>Mach Learn</source> (<year>2019</year>) <volume>108</volume>(<issue>10</issue>):<fpage>1851</fpage>&#x2013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1007/s10994-018-05777-9</pub-id> </citation>
</ref>
<ref id="B62">
<label>62.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tahmasebi</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Ravanmehr</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Mohamadrezaei</surname>
<given-names>R</given-names>
</name>
</person-group>. <article-title>Social Movie Recommender System Based on Deep Autoencoder Network Using Twitter Data</article-title>. <source>Neural Comput Applic</source> (<year>2021</year>) <volume>33</volume>(<issue>5</issue>):<fpage>1607</fpage>&#x2013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1007/s00521-020-05085-1</pub-id> </citation>
</ref>
<ref id="B63">
<label>63.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Behera</surname>
<given-names>DK</given-names>
</name>
<name>
<surname>Das</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Swetanisha</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Predicting Users&#x27; Preferences for Movie Recommender System Using Restricted Boltzmann Machine</article-title>. In: <source>Computational Intelligence in Data Mining</source>. <publisher-name>Springer</publisher-name> (<year>2019</year>). p. <fpage>759</fpage>&#x2013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1007/978-981-10-8055-5_67</pub-id> </citation>
</ref>
<ref id="B64">
<label>64.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Polatidis</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Georgiadis</surname>
<given-names>CK</given-names>
</name>
<name>
<surname>Pimenidis</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Mouratidis</surname>
<given-names>H</given-names>
</name>
</person-group>. <article-title>Privacy-preserving Collaborative Recommendations Based on Random Perturbations</article-title>. <source>Expert Syst Appl</source> (<year>2017</year>) <volume>71</volume>:<fpage>18</fpage>&#x2013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2016.11.018</pub-id> </citation>
</ref>
<ref id="B65">
<label>65.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Salih Karaka&#x15f;l&#x131;</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Aydin</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Yarkan</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Boyaci</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Dynamic Feature Selection for Spam Detection in Twitter</article-title>. In: <conf-name>International Telecommunications Conference</conf-name>. <publisher-name>Springer</publisher-name> (<year>2019</year>). p. <fpage>239</fpage>&#x2013;<lpage>50</lpage>. </citation>
</ref>
<ref id="B66">
<label>66.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jain</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Sharma</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Agarwal</surname>
<given-names>B</given-names>
</name>
</person-group>. <article-title>Spam Detection in Social media Using Convolutional and Long Short Term Memory Neural Network</article-title>. <source>Ann Math Artif Intell</source> (<year>2019</year>) <volume>85</volume>(<issue>1</issue>):<fpage>21</fpage>&#x2013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1007/s10472-018-9612-z</pub-id> </citation>
</ref>
<ref id="B67">
<label>67.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tajalizadeh</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Boostani</surname>
<given-names>R</given-names>
</name>
</person-group>. <article-title>A Novel Stream Clustering Framework for Spam Detection in Twitter</article-title>. <source>IEEE Trans Comput Soc Syst</source> (<year>2019</year>) <volume>6</volume>(<issue>3</issue>):<fpage>525</fpage>&#x2013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1109/tcss.2019.2910818</pub-id> </citation>
</ref>
<ref id="B68">
<label>68.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Xin</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>An Attention-Based Graph Neural Network for Spam Bot Detection in Social Networks</article-title>. <source>Appl Sci</source> (<year>2020</year>) <volume>10</volume>(<issue>22</issue>):<fpage>8160</fpage>. <pub-id pub-id-type="doi">10.3390/app10228160</pub-id> </citation>
</ref>
<ref id="B69">
<label>69.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Harkreader</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Shin</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>G</given-names>
</name>
</person-group>. <article-title>Analyzing Spammers&#x2019; Social Networks for Fun and Profit: a Case Study of Cyber Criminal Ecosystem on Twitter</article-title>. In: <conf-name>Proceedings of the 21st international conference on World Wide Web</conf-name> (<year>2012</year>). p. <fpage>71</fpage>&#x2013;<lpage>80</lpage>. </citation>
</ref>
<ref id="B70">
<label>70.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Hou</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>Detection of Social Network Spam Based on Improved Extreme Learning Machine</article-title>. <source>IEEE Access</source> (<year>2020</year>) <volume>8</volume>:<fpage>112003</fpage>&#x2013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1109/access.2020.3002940</pub-id> </citation>
</ref>
<ref id="B71">
<label>71.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Gong</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Xie</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Qin</surname>
<given-names>AK</given-names>
</name>
</person-group>. <article-title>An Attention-Based Unsupervised Adversarial Model for Movie Review Spam Detection</article-title>. <source>IEEE Trans Multimedia</source> (<year>2020</year>) <volume>23</volume>:<fpage>784</fpage>&#x2013;<lpage>96</lpage>. </citation>
</ref>
<ref id="B72">
<label>72.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Quinn</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Olszewska</surname>
<given-names>JI</given-names>
</name>
</person-group>. <article-title>British Sign Language Recognition in the Wild Based on Multi-Class SVM</article-title>. In: <conf-name>2019 Federated Conference on Computer Science and Information Systems (FedCSIS)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2019</year>). p. <fpage>81</fpage>&#x2013;<lpage>6</lpage>. </citation>
</ref>
<ref id="B73">
<label>73.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>An</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Cho</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Variational Autoencoder Based Anomaly Detection Using Reconstruction Probability</article-title>. <source>Spec Lecture IE</source> (<year>2015</year>) <volume>2</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>18</lpage>. </citation>
</ref>
<ref id="B74">
<label>74.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Xin</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>A Heterogeneous Ensemble Learning Framework for Spam Detection in Social Networks with Imbalanced Data</article-title>. <source>Appl Sci</source> (<year>2020</year>) <volume>10</volume>(<issue>3</issue>):<fpage>936</fpage>. <pub-id pub-id-type="doi">10.3390/app10030936</pub-id> </citation>
</ref>
<ref id="B75">
<label>75.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>6 Million Spam Tweets: A Large Ground Truth for Timely Twitter Spam Detection</article-title>. In: <conf-name>2015 IEEE international conference on communications (ICC)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2015</year>). p. <fpage>7065</fpage>&#x2013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1109/icc.2015.7249453</pub-id> </citation>
</ref>
<ref id="B76">
<label>76.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Sze-To</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>AKC</given-names>
</name>
</person-group>. <article-title>A Weight-Selection Strategy on Training Deep Neural Networks for Imbalanced Classification</article-title>. In: <conf-name>International Conference Image Analysis and Recognition</conf-name>. <publisher-name>Springer</publisher-name> (<year>2017</year>). p. <fpage>3</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-59876-5_1</pub-id> </citation>
</ref>
<ref id="B77">
<label>77.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alom</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Carminati</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Ferrari</surname>
<given-names>E</given-names>
</name>
</person-group>. <source>A Deep Learning Model for Twitter Spam Detection</source>, <volume>18</volume>. <publisher-name>Online Social Networks and Media</publisher-name> (<year>2020</year>). p. <fpage>100079</fpage>. <pub-id pub-id-type="doi">10.1016/j.osnem.2020.100079</pub-id>
<article-title>A Deep Learning Model for Twitter Spam Detection</article-title>
<source>Online Soc Networks Media</source> </citation>
</ref>
<ref id="B78">
<label>78.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Swe</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Myo</surname>
<given-names>NN</given-names>
</name>
</person-group>. <article-title>Fake Accounts Detection on Twitter Using Blacklist</article-title>. In: <conf-name>2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2018</year>). p. <fpage>562</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/icis.2018.8466499</pub-id> </citation>
</ref>
<ref id="B79">
<label>79.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Neha</surname>
<given-names>MV</given-names>
</name>
<name>
<surname>Nair</surname>
<given-names>MS</given-names>
</name>
</person-group>. <article-title>A Novel Twitter Spam Detection Technique by Integrating Inception Network with Attention-Based LSTM</article-title>. In: <conf-name>2021 5th International Conference on Trends in Electronics and Informatics (ICOEI)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2021</year>). p. <fpage>1009</fpage>&#x2013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1109/icoei51242.2021.9452825</pub-id> </citation>
</ref>
<ref id="B80">
<label>80.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arora</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Risteski</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>A Latent Variable Model Approach to PMI-Based Word Embeddings</article-title>. <source>TACL</source> (<year>2016</year>) <volume>4</volume>:<fpage>385</fpage>&#x2013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1162/tacl_a_00106</pub-id> </citation>
</ref>
<ref id="B81">
<label>81.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Szegedy</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zaremba</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Sutskever</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Bruna</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Erhan</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Goodfellow</surname>
<given-names>I</given-names>
</name>
<etal/>
</person-group> <article-title>Intriguing Properties of Neural Networks</article-title>. In: <conf-name>2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings</conf-name>. <publisher-loc>Banff, AB, Canada</publisher-loc> (<year>2014</year>). </citation>
</ref>
<ref id="B82">
<label>82.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Goodfellow</surname>
<given-names>IJ</given-names>
</name>
<name>
<surname>Shlens</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Szegedy</surname>
<given-names>C</given-names>
</name>
</person-group>. <article-title>Explaining and Harnessing Adversarial Examples</article-title>. In: <conf-name>3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings</conf-name>. <publisher-loc>San Diego, CA, United States</publisher-loc> (<year>2015</year>). </citation>
</ref>
<ref id="B83">
<label>83.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Luo</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Boix</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Roig</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Poggio</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>Z</given-names>
</name>
</person-group>. <source>Foveation-based Mechanisms Alleviate Adversarial Examples</source> (<year>2015</year>). <comment>arXiv preprint arXiv:1511.06292</comment>. </citation>
</ref>
<ref id="B84">
<label>84.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gilmer</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Metz</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Faghri</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Schoenholz</surname>
<given-names>SS</given-names>
</name>
<name>
<surname>Raghu</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Martin</surname>
<given-names>W</given-names>
</name>
<etal/>
</person-group> <source>Adversarial Spheres</source> (<year>2018</year>). <comment>arXiv preprint arXiv:1801.02774</comment>. </citation>
</ref>
<ref id="B85">
<label>85.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Ilyas</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Santurkar</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Tsipras</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Engstrom</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Tran</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Madry</surname>
<given-names>A</given-names>
</name>
</person-group>. <source>Adversarial Examples Are Not Bugs, They Are Features</source> (<year>2019</year>). <comment>arXiv preprint arXiv:1905.02175</comment>. </citation>
</ref>
<ref id="B86">
<label>86.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yuan</surname>
<given-names>X</given-names>
</name>
<name>
<surname>He</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X</given-names>
</name>
</person-group>. <article-title>Adversarial Examples: Attacks and Defenses for Deep Learning</article-title>. <source>IEEE Trans Neural Netw Learn Syst.</source> (<year>2019</year>) <volume>30</volume>(<issue>9</issue>):<fpage>2805</fpage>&#x2013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1109/tnnls.2018.2886017</pub-id> </citation>
</ref>
<ref id="B87">
<label>87.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ji</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>TY</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>B</given-names>
</name>
</person-group>. <article-title>Application of Artificial Intelligence Technology in English Online Learning Platform</article-title>. <source>J&#x20;Softw</source> (<year>2021</year>) <volume>32</volume>(<issue>1</issue>):<fpage>41</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-89508-2_6</pub-id> </citation>
</ref>
<ref id="B88">
<label>88.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Carlini</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wagner</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>Towards Evaluating the Robustness of Neural Networks</article-title>. In: <conf-name>2017 IEEE Symposium on Security and Privacy (SP)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2017</year>). p. <fpage>39</fpage>&#x2013;<lpage>57</lpage>. <pub-id pub-id-type="doi">10.1109/sp.2017.49</pub-id> </citation>
</ref>
<ref id="B89">
<label>89.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Carlini</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wagner</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods</article-title>. In: <conf-name>Proceedings of the 10th ACM workshop on artificial intelligence and security</conf-name> (<year>2017</year>). p. <fpage>3</fpage>&#x2013;<lpage>14</lpage>. </citation>
</ref>
<ref id="B90">
<label>90.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Carlini</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wagner</surname>
<given-names>D</given-names>
</name>
</person-group>. <source>MagNet and &#x201c;Efficient Defenses against Adversarial Attacks&#x201d; Are Not Robust to Adversarial Examples</source> (<year>2017</year>). <comment>arXiv preprint arXiv:1711.08478</comment>. </citation>
</ref>
<ref id="B91">
<label>91.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>P-Y</given-names>
</name>
<name>
<surname>Sharma</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Yi</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>C-J</given-names>
</name>
</person-group>. <article-title>EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples</article-title>. In: <conf-name>Thirty-second AAAI conference on artificial intelligence</conf-name> (<year>2018</year>). </citation>
</ref>
<ref id="B92">
<label>92.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kurakin</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Goodfellow</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>S</given-names>
</name>
</person-group>. <source>Adversarial Examples in the Physical World</source> (<year>2016</year>). <comment>arXiv preprint arXiv:1607.02533</comment>. </citation>
</ref>
<ref id="B93">
<label>93.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kurakin</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Goodfellow</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>S</given-names>
</name>
</person-group>. <source>Adversarial Machine Learning at Scale</source> (<year>2016</year>). <comment>arXiv preprint arXiv:1611.01236</comment>. </citation>
</ref>
<ref id="B94">
<label>94.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Madry</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Makelov</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Schmidt</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Tsipras</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Vladu</surname>
<given-names>A</given-names>
</name>
</person-group>. <source>Towards Deep Learning Models Resistant to Adversarial Attacks</source>. In: <conf-name>6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings</conf-name> (<year>2017</year>), <publisher-loc>Vancouver, BC, Canada</publisher-loc>. </citation>
</ref>
<ref id="B95">
<label>95.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Dong</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Pang</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>X</given-names>
</name>
<etal/>
</person-group> <article-title>Boosting Adversarial Attacks with Momentum</article-title>. In: <conf-name>Proceedings of the IEEE conference on computer vision and pattern recognition</conf-name> (<year>2018</year>). p. <fpage>9185</fpage>&#x2013;<lpage>93</lpage>. <pub-id pub-id-type="doi">10.1109/cvpr.2018.00957</pub-id> </citation>
</ref>
<ref id="B96">
<label>96.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Moosavi-Dezfooli</surname>
<given-names>S-M</given-names>
</name>
<name>
<surname>Fawzi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Frossard</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Deepfool: a Simple and Accurate Method to Fool Deep Neural Networks</article-title>. In: <conf-name>Proceedings of the IEEE conference on computer vision and pattern recognition</conf-name> (<year>2016</year>). p. <fpage>2574</fpage>&#x2013;<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1109/cvpr.2016.282</pub-id> </citation>
</ref>
<ref id="B97">
<label>97.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Baluja</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Fischer</surname>
<given-names>I</given-names>
</name>
</person-group>. <article-title>Learning to Attack: Adversarial Transformation Networks</article-title>. In: <conf-name>Thirty-second AAAI conference on artificial intelligence</conf-name> (<year>2018</year>). </citation>
</ref>
<ref id="B98">
<label>98.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Xiao</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>J-Y</given-names>
</name>
<name>
<surname>He</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>Generating Adversarial Examples with Adversarial Networks</article-title>. In: <conf-name>Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18)</conf-name>. <publisher-loc>Stockholm, Sweden</publisher-loc>: <publisher-name>International Joint Conferences on Artificial Intelligence Organization</publisher-name> (<year>2018</year>). </citation>
</ref>
<ref id="B99">
<label>99.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Bai</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Han</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>B</given-names>
</name>
<etal/>
</person-group> <article-title>AI-GAN: Attack-Inspired Generation of Adversarial Examples</article-title>. In: <conf-name>2021 IEEE International Conference on Image Processing (ICIP)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2021</year>). p. <fpage>2543</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1109/icip42928.2021.9506278</pub-id> </citation>
</ref>
<ref id="B100">
<label>100.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Mao</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Xue</surname>
<given-names>H</given-names>
</name>
</person-group>. <source>Composite Adversarial Attacks</source> (<year>2020</year>). <comment>arXiv preprint arXiv:2012.05434</comment>. </citation>
</ref>
<ref id="B101">
<label>101.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>P-Y</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Sharma</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Yi</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>C-J</given-names>
</name>
</person-group>. <article-title>ZOO: Zeroth Order Optimization Based Black-Box Attacks to Deep Neural Networks without Training Substitute Models</article-title>. In: <conf-name>Proceedings of the 10th ACM workshop on artificial intelligence and security</conf-name> (<year>2017</year>). p. <fpage>15</fpage>&#x2013;<lpage>26</lpage>. </citation>
</ref>
<ref id="B102">
<label>102.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ilyas</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Engstrom</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Athalye</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>Black-box Adversarial Attacks with Limited Queries and Information</article-title>. In: <conf-name>International Conference on Machine Learning</conf-name>. <publisher-loc>Stockholm, Sweden</publisher-loc>: <publisher-name>PMLR</publisher-name> (<year>2018</year>). p. <fpage>2137</fpage>&#x2013;<lpage>46</lpage>. </citation>
</ref>
<ref id="B103">
<label>103.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Salimans</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Ho</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Sidor</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Sutskever</surname>
<given-names>I</given-names>
</name>
</person-group>. <source>Evolution Strategies as a Scalable Alternative to Reinforcement Learning</source> (<year>2017</year>). <comment>arXiv preprint arXiv:1703.03864</comment>. </citation>
</ref>
<ref id="B104">
<label>104.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tu</surname>
<given-names>C-C</given-names>
</name>
<name>
<surname>Ting</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>P-Y</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Yi</surname>
<given-names>J</given-names>
</name>
<etal/>
</person-group> <article-title>AutoZOOM: Autoencoder-Based Zeroth Order Optimization Method for Attacking Black-Box Neural Networks</article-title>. <source>AAAI</source> (<year>2019</year>) <volume>33</volume>:<fpage>742</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1609/aaai.v33i01.3301742</pub-id> </citation>
</ref>
<ref id="B105">
<label>105.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Du</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>J</given-names>
</name>
</person-group>. <source>Query-efficient Meta Attack to Deep Neural Networks</source> (<year>2019</year>). <comment>arXiv preprint arXiv:1906.02398</comment>. </citation>
</ref>
<ref id="B106">
<label>106.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Zeng</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Xia</surname>
<given-names>S-T</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Improving Query Efficiency of Black-Box Adversarial Attack</article-title>. In: <conf-name>Computer Vision&#x2013;ECCV 2020: 16th European Conference</conf-name>; <conf-date>August 23&#x2013;28, 2020</conf-date>; <conf-loc>Glasgow, UK</conf-loc>. <publisher-name>Springer</publisher-name> (<year>2020</year>). p. <fpage>101</fpage>&#x2013;<lpage>16</lpage>. </citation>
</ref>
<ref id="B107">
<label>107.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>MI</given-names>
</name>
<name>
<surname>Wainwright</surname>
<given-names>MJ</given-names>
</name>
</person-group>. <article-title>HopSkipJumpAttack: A Query-Efficient Decision-Based Attack</article-title>. In: <conf-name>2020 IEEE Symposium on Security and Privacy (SP)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2020</year>). p. <fpage>1277</fpage>&#x2013;<lpage>94</lpage>. </citation>
</ref>
<ref id="B108">
<label>108.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Papernot</surname>
<given-names>N</given-names>
</name>
<name>
<surname>McDaniel</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Goodfellow</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Jha</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Celik</surname>
<given-names>ZB</given-names>
</name>
<name>
<surname>Swami</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Practical Black-Box Attacks against Machine Learning</article-title>. In: <conf-name>Proceedings of the 2017 ACM on Asia conference on computer and communications security</conf-name> (<year>2017</year>). p. <fpage>506</fpage>&#x2013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1145/3052973.3053009</pub-id> </citation>
</ref>
<ref id="B109">
<label>109.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>C</given-names>
</name>
</person-group>. <article-title>DaST: Data-free Substitute Training for Adversarial Attacks</article-title>. In: <conf-name>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</conf-name> (<year>2020</year>). p. <fpage>234</fpage>&#x2013;<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1109/cvpr42600.2020.00031</pub-id> </citation>
</ref>
<ref id="B110">
<label>110.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yong</surname>
<given-names>J-H</given-names>
</name>
</person-group>. <source>Switching Transferable Gradient Directions for Query-Efficient Black-Box Adversarial Attacks</source> (<year>2020</year>). <comment>arXiv preprint arXiv:2009.07191</comment>. </citation>
</ref>
<ref id="B111">
<label>111.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Hermes Attack: Steal DNN Models with Lossless Inference Accuracy</article-title>. In: <conf-name>30th USENIX Security Symposium (USENIX Security 21)</conf-name> (<year>2021</year>). </citation>
</ref>
<ref id="B112">
<label>112.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Yin</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Yao</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Fu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Ding</surname>
<given-names>S</given-names>
</name>
<etal/>
</person-group> <article-title>Delving into Data: Effectively Substitute Training for Black-Box Attack</article-title>. In: <conf-name>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</conf-name> (<year>2021</year>). p. <fpage>4761</fpage>&#x2013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1109/cvpr46437.2021.00473</pub-id> </citation>
</ref>
<ref id="B113">
<label>113.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Yong</surname>
<given-names>J-H</given-names>
</name>
</person-group>. <article-title>Simulating Unknown Target Models for Query-Efficient Black-Box Attacks</article-title>. In: <conf-name>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</conf-name> (<year>2021</year>). p. <fpage>11835</fpage>&#x2013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1109/cvpr46437.2021.01166</pub-id> </citation>
</ref>
<ref id="B114">
<label>114.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Das</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Shanbhogue</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>S-T</given-names>
</name>
<name>
<surname>Hohman</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>L</given-names>
</name>
<etal/>
</person-group> <article-title>Shield: Fast, Practical Defense and Vaccination for Deep Learning Using JPEG Compression</article-title>. In: <conf-name>Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery &#x26; Data Mining</conf-name> (<year>2018</year>). p. <fpage>196</fpage>&#x2013;<lpage>204</lpage>. </citation>
</ref>
<ref id="B115">
<label>115.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Fu</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>S-W</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Defense for Adversarial Videos by Self-Adaptive JPEG Compression and Optical Texture</article-title>. In: <conf-name>Proceedings of the 2nd ACM International Conference on Multimedia in Asia</conf-name> (<year>2021</year>). p. <fpage>1</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1145/3444685.3446308</pub-id> </citation>
</ref>
<ref id="B116">
<label>116.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Samangouei</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Kabkab</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Chellappa</surname>
<given-names>R</given-names>
</name>
</person-group>. <article-title>Defense-gan: Protecting Classifiers against Adversarial Attacks Using Generative Models</article-title>. <conf-name>6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings</conf-name>, <publisher-loc>Vancouver, BC, Canada</publisher-loc> (<year>2018</year>). </citation>
</ref>
<ref id="B117">
<label>117.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Hwang</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Jang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Yoon</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Cho</surname>
<given-names>NI</given-names>
</name>
</person-group>. <article-title>PuVAE: A Variational Autoencoder to Purify Adversarial Examples</article-title>. <source>IEEE Access</source> (<year>2019</year>) <volume>7</volume>:<fpage>126582</fpage>&#x2013;<lpage>93</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2939352</pub-id> </citation>
</ref>
<ref id="B118">
<label>118.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>W-A</given-names>
</name>
<name>
<surname>Balaji</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Samangouei</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Chellappa</surname>
<given-names>R</given-names>
</name>
</person-group>. <source>Invert and Defend: Model-Based Approximate Inversion of Generative Adversarial Networks for Secure Inference</source> (<year>2019</year>). <comment>arXiv preprint arXiv:1911.10291</comment>. </citation>
</ref>
<ref id="B119">
<label>119.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Rao</surname>
<given-names>Q</given-names>
</name>
</person-group>. <article-title>Defense against Adversarial Attacks by Reconstructing Images</article-title>. <source>IEEE Trans Image Process</source> (<year>2021</year>) <volume>30</volume>:<fpage>6117</fpage>&#x2013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1109/tip.2021.3092582</pub-id> </citation>
</ref>
<ref id="B120">
<label>120.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Hou</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Zha</surname>
<given-names>H</given-names>
</name>
<etal/>
</person-group> <article-title>Detection Based Defense against Adversarial Examples from the Steganalysis Point of View</article-title>. In: <conf-name>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</conf-name> (<year>2019</year>). p. <fpage>4825</fpage>&#x2013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1109/cvpr.2019.00496</pub-id> </citation>
</ref>
<ref id="B121">
<label>121.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yin</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Shi</surname>
<given-names>YQ</given-names>
</name>
<etal/>
</person-group> <article-title>SmsNet: A New Deep Convolutional Neural Network Model for Adversarial Example Detection</article-title>. <source>IEEE Trans Multimedia</source> (<year>2021</year>), <publisher-name>IEEE</publisher-name>. </citation>
</ref>
<ref id="B122">
<label>122.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tian</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Jia</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain</article-title>. <source>Proc. AAAI Conf. Art. Intel.</source> (<year>2021</year>) <volume>35</volume>(<issue>11</issue>):<fpage>9877</fpage>&#x2013;<lpage>85</lpage>. </citation>
</ref>
<ref id="B123">
<label>123.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Shangguan</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Ji</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Act-detector: Adaptive Channel Transformation-Based Light-Weighted Detector for Adversarial Attacks</article-title>. <source>Inf Sci</source> (<year>2021</year>) <volume>564</volume>:<fpage>163</fpage>&#x2013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1016/j.ins.2021.01.035</pub-id> </citation>
</ref>
<ref id="B124">
<label>124.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sutanto</surname>
<given-names>RE</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Real-time Adversarial Attack Detection with Deep Image Prior Initialized as a High-Level Representation Based Blurring Network</article-title>. <source>Electronics</source> (<year>2021</year>) <volume>10</volume>(<issue>1</issue>):<fpage>52</fpage>. <pub-id pub-id-type="doi">10.3390/electronics10010052</pub-id> </citation>
</ref>
<ref id="B125">
<label>125.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liang</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Shi</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>X</given-names>
</name>
</person-group>. <article-title>Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction</article-title>. <source>IEEE Trans Dependable Secure Comput</source> (<year>2018</year>) <volume>18</volume>(<issue>1</issue>):<fpage>72</fpage>&#x2013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1109/TDSC.2018.2874243</pub-id> </citation>
</ref>
<ref id="B126">
<label>126.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bai</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Wen</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Q</given-names>
</name>
</person-group>. <source>Recent Advances in Adversarial Training for Adversarial Robustness</source> (<year>2021</year>). <comment>arXiv preprint arXiv:2102.01356</comment>. </citation>
</ref>
<ref id="B127">
<label>127.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Shafahi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Najibi</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ghiasi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Dickerson</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Studer</surname>
<given-names>C</given-names>
</name>
<etal/>
</person-group> <source>Adversarial Training for Free!</source> (<year>2019</year>) <publisher-loc>Vancouver, BC, Canada</publisher-loc>, <volume>32</volume>. </citation>
</ref>
<ref id="B128">
<label>128.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Dong</surname>
<given-names>B</given-names>
</name>
</person-group>. <source>You Only Propagate once: Accelerating Adversarial Training via Maximal Principle</source> (<year>2019</year>) <publisher-loc>Vancouver, BC, Canada</publisher-loc>, <volume>32</volume>. </citation>
</ref>
<ref id="B129">
<label>129.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Wong</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Rice</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Kolter</surname>
<given-names>JZ</given-names>
</name>
</person-group>. <source>Fast Is Better than Free: Revisiting Adversarial Training</source> (<year>2020</year>). <comment>arXiv preprint arXiv:2001.03994</comment>. </citation>
</ref>
<ref id="B130">
<label>130.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gowal</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Dvijotham</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Stanforth</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bunel</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Qin</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Uesato</surname>
<given-names>J</given-names>
</name>
<etal/>
</person-group> <source>On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models</source> (<year>2018</year>). <comment>arXiv preprint arXiv:1810.12715</comment>. </citation>
</ref>
<ref id="B131">
<label>131.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Gowal</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Stanforth</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>B</given-names>
</name>
<etal/>
</person-group> <article-title>Towards Stable and Efficient Training of Verifiably Robust Neural Networks</article-title>. <source>8th International Conference on Learning Representations, ICLR 2020</source> (<year>2020</year>). </citation>
</ref>
<ref id="B132">
<label>132.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Weng</surname>
<given-names>T-W</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>P-Y</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>C-J</given-names>
</name>
<name>
<surname>Daniel</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>Efficient Neural Network Robustness Certification with General Activation Functions</article-title>. In: <conf-name>Advances in Neural Information Processing Systems 31 (NeurIPS 2018)</conf-name>. <publisher-loc>Montreal, QC, Canada</publisher-loc> (<year>2018</year>), <fpage>4939</fpage>&#x2013;<lpage>48</lpage>. </citation>
</ref>
<ref id="B133">
<label>133.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Lanchantin</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Soffa</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers</article-title>. In: <conf-name>2018 IEEE Security and Privacy Workshops (SPW)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2018</year>). p. <fpage>50</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/spw.2018.00016</pub-id> </citation>
</ref>
<ref id="B134">
<label>134.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vijayaraghavan</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Roy</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model</article-title>. <source>Mach. Learn. Knowl. Disc. Datab.</source> (<year>2019</year>). p. <fpage>711</fpage>&#x2013;<lpage>726</lpage>. </citation>
</ref>
<ref id="B135">
<label>135.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Iyyer</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Wieting</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Gimpel</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Zettlemoyer</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>Adversarial Example Generation with Syntactically Controlled Paraphrase Networks</article-title>. In: <conf-name>NAACL HLT 2018-2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies-Proceedings of the Conference</conf-name>. <publisher-loc>New Orleans, LA, United States</publisher-loc> (<year>2018</year>). p. <fpage>1875</fpage>&#x2013;<lpage>85</lpage>. </citation>
</ref>
<ref id="B136">
<label>136.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ren</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>Y</given-names>
</name>
<etal/>
</person-group> <article-title>Generating Natural Language Adversarial Examples on a Large Scale With Generative Models</article-title>. <source>Front. Artif. Intell. App.</source> (<year>2020</year>) <volume>325</volume>(<issue>09226389</issue>):<fpage>2156</fpage>&#x2013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.3233/FAIA200340</pub-id> </citation>
</ref>
<ref id="B137">
<label>137.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Ji</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>T</given-names>
</name>
</person-group>. <article-title>TextBugger: Generating Adversarial Text against Real-World Applications</article-title>. In: <conf-name>26th Annual Network and Distributed System Security Symposium (NDSS)</conf-name> (<year>2019</year>). </citation>
</ref>
<ref id="B138">
<label>138.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Xue</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Qiu</surname>
<given-names>X</given-names>
</name>
</person-group>. <source>BERT-ATTACK: Adversarial Attack against BERT Using BERT</source> (<year>2020</year>). <comment>arXiv preprint arXiv:2004.09984</comment>. </citation>
</ref>
<ref id="B139">
<label>139.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jin</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Jin</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Szolovits</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment</article-title>. <source>Proc. AAAI Conf. Artif. Intell.</source> (<year>2020</year>) <volume>34</volume>:<fpage>8018</fpage>&#x2013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1609/aaai.v34i05.6311</pub-id> </citation>
</ref>
<ref id="B140">
<label>140.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Alzantot</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sharma</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Ahmed</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Ho</surname>
<given-names>B-J</given-names>
</name>
<name>
<surname>Srivastava</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>K-W</given-names>
</name>
</person-group>. <article-title>Generating Natural Language Adversarial Examples</article-title>. In: <conf-name>Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018</conf-name>. <publisher-loc>Brussels, Belgium</publisher-loc> (<year>2018</year>), <fpage>2890</fpage>&#x2013;<lpage>96</lpage>. </citation>
</ref>
<ref id="B141">
<label>141.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>G-Q</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Pei</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Wordchange: Adversarial Examples Generation Approach for Chinese Text Classification</article-title>. <source>IEEE Access</source> (<year>2020</year>) <volume>8</volume>:<fpage>79561</fpage>&#x2013;<lpage>72</lpage>. </citation>
</ref>
<ref id="B142">
<label>142.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Peng</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Brockett</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>M-T</given-names>
</name>
<etal/>
</person-group> <article-title>Contextualized Perturbation for Textual Adversarial Attack</article-title>. In: <conf-name>Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</conf-name>. <publisher-name>Association for Computational Linguistics</publisher-name> (<year>2021</year>), <fpage>5053</fpage>&#x2013;<lpage>69</lpage>. </citation>
</ref>
<ref id="B143">
<label>143.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Garg</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ramakrishnan</surname>
<given-names>G</given-names>
</name>
</person-group>. <article-title>BAE: BERT-Based Adversarial Examples for Text Classification</article-title>. In: <conf-name>Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</conf-name> (<year>2020</year>). p. <fpage>6174</fpage>&#x2013;<lpage>81</lpage>. </citation>
</ref>
<ref id="B144">
<label>144.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Maheshwary</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Maheshwary</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Pudi</surname>
<given-names>V</given-names>
</name>
</person-group>. <article-title>Generating Natural Language Attacks in a Hard Label Black Box Setting</article-title>. In: <conf-name>Proceedings of the 35th AAAI Conference on Artificial Intelligence</conf-name> (<year>2021</year>). </citation>
</ref>
<ref id="B145">
<label>145.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Q</given-names>
</name>
<etal/>
</person-group> <source>Word-level Textual Adversarial Attacking as Combinatorial Optimization</source> (<year>2019</year>). <comment>arXiv preprint arXiv:1910.12196</comment>. </citation>
</ref>
<ref id="B146">
<label>146.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Gong</surname>
<given-names>NZ</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>Y</given-names>
</name>
</person-group>. <source>Fake Co-visitation Injection Attacks to Recommender Systems</source>. <publisher-name>NDSS</publisher-name> (<year>2017</year>). </citation>
</ref>
<ref id="B147">
<label>147.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Gong</surname>
<given-names>NZ</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>Poisoning Attacks to Graph-Based Recommender Systems</article-title>. In: <conf-name>Proceedings of the 34th Annual Computer Security Applications Conference</conf-name> (<year>2018</year>). p. <fpage>381</fpage>&#x2013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1145/3274694.3274706</pub-id> </citation>
</ref>
<ref id="B148">
<label>148.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Christakopoulou</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Banerjee</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Adversarial Attacks on an Oblivious Recommender</article-title>. In: <conf-name>Proceedings of the 13th ACM Conference on Recommender Systems</conf-name> (<year>2019</year>). p. <fpage>322</fpage>&#x2013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1145/3298689.3347031</pub-id> </citation>
</ref>
<ref id="B149">
<label>149.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sun</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>T-Y</given-names>
</name>
<name>
<surname>Honavar</surname>
<given-names>V</given-names>
</name>
</person-group>. <article-title>Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach</article-title>. <source>Proc Web Conf</source> (<year>2020</year>) <volume>2020</volume>:<fpage>673</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1145/3366423.3380149</pub-id> </citation>
</ref>
<ref id="B150">
<label>150.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>J</given-names>
</name>
<etal/>
</person-group> <article-title>PoisonRec: An Adaptive Data Poisoning Framework for Attacking Black-Box Recommender Systems</article-title>. In: <conf-name>2020 IEEE 36th International Conference on Data Engineering (ICDE)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2020</year>). p. <fpage>157</fpage>&#x2013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.1109/icde48307.2020.00021</pub-id> </citation>
</ref>
<ref id="B151">
<label>151.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Chang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Cui</surname>
<given-names>P</given-names>
</name>
<etal/>
</person-group> <source>Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge</source> (<year>2021</year>). <comment>arXiv preprint arXiv:2105.12419</comment>. </citation>
</ref>
<ref id="B152">
<label>152.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Gong</surname>
<given-names>NZ</given-names>
</name>
<name>
<surname>Jia</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>Influence Function Based Data Poisoning Attacks to Top-N Recommender Systems</article-title>. In: <conf-name>Proceedings of The Web Conference 2020</conf-name> (<year>2020</year>). p. <fpage>3019</fpage>&#x2013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1145/3366423.3380072</pub-id> </citation>
</ref>
<ref id="B153">
<label>153.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Singh</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Vorobeychik</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Data Poisoning Attacks on Factorization-Based Collaborative Filtering</article-title>. <source>Adv Neural Inf Process Syst</source> (<year>2016</year>) <volume>29</volume>:<fpage>1885</fpage>&#x2013;<lpage>93</lpage>. </citation>
</ref>
<ref id="B154">
<label>154.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Q</given-names>
</name>
</person-group>. <article-title>Attacking Recommender Systems with Augmented User Profiles</article-title>. In: <conf-name>Proceedings of the 29th ACM International Conference on Information &#x26; Knowledge Management</conf-name> (<year>2020</year>). p. <fpage>855</fpage>&#x2013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1145/3340531.3411884</pub-id> </citation>
</ref>
<ref id="B155">
<label>155.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Derr</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<etal/>
</person-group> <article-title>Attacking Black-Box Recommendations via Copying Cross-Domain User Profiles</article-title>. In: <conf-name>2021 IEEE 37th International Conference on Data Engineering (ICDE)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2021</year>). p. <fpage>1583</fpage>&#x2013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1109/icde51399.2021.00140</pub-id> </citation>
</ref>
<ref id="B156">
<label>156.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zhan</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Pei</surname>
<given-names>X</given-names>
</name>
</person-group>. <source>Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-Based Attack and Defense</source> (<year>2021</year>). <comment>arXiv preprint arXiv:2104.15061</comment>. </citation>
</ref>
<ref id="B157">
<label>157.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Finkelshtein</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Baskin</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zheltonozhskii</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Alon</surname>
<given-names>U</given-names>
</name>
</person-group>. <source>Single-node Attack for Fooling Graph Neural Networks</source> (<year>2020</year>). <comment>arXiv preprint arXiv:2011.03574</comment>. </citation>
</ref>
<ref id="B158">
<label>158.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Mu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Gong</surname>
<given-names>NZ</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>M</given-names>
</name>
</person-group>. <source>Data Poisoning Attacks to Deep Learning Based Recommender Systems</source> (<year>2021</year>). <comment>arXiv preprint arXiv:2101.02644</comment>. </citation>
</ref>
<ref id="B159">
<label>159.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Lian</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Ge</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>E</given-names>
</name>
</person-group>. <article-title>Triple Adversarial Learning for Influence Based Poisoning Attack in Recommender Systems</article-title>. In: <conf-name>Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery &#x26; Data Mining</conf-name> (<year>2021</year>). p. <fpage>1830</fpage>&#x2013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1145/3447548.3467335</pub-id> </citation>
</ref>
<ref id="B160">
<label>160.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pruthi</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Dhingra</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Lipton</surname>
<given-names>ZC</given-names>
</name>
</person-group>. <article-title>Combating Adversarial Misspellings with Robust Word Recognition</article-title>. In: <conf-name>ACL 2019-57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference</conf-name>. <publisher-loc>Florence, Italy</publisher-loc> (<year>2019</year>), <fpage>5582</fpage>&#x2013;<lpage>91</lpage>. </citation>
</ref>
<ref id="B161">
<label>161.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jia</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Raghunathan</surname>
<given-names>A</given-names>
</name>
<name>
<surname>G&#xf6;ksel</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Liang</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Certified Robustness to Adversarial Word Substitutions</article-title>. In: <conf-name>EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference</conf-name>. <publisher-loc>Hong Kong, China</publisher-loc> (<year>2019</year>), <fpage>4129</fpage>&#x2013;<lpage>42</lpage>. </citation>
</ref>
<ref id="B162">
<label>162.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>J-Y</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>K-W</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification</article-title>. In: <conf-name>EMNLP-IJCNLP 2019-2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference</conf-name>. <publisher-loc>Hong Kong, China</publisher-loc> (<year>2019</year>), <fpage>4904</fpage>&#x2013;<lpage>13</lpage>. </citation>
</ref>
<ref id="B163">
<label>163.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Si</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Q</given-names>
</name>
<etal/>
</person-group> <source>Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust fine-tuning</source> (<year>2020</year>). <comment>arXiv preprint arXiv:2012.15699</comment>. </citation>
</ref>
<ref id="B164">
<label>164.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ren</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Deng</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>He</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Che</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency</article-title>. In: <conf-name>Proceedings of the 57th annual meeting of the association for computational linguistics</conf-name>. <publisher-loc>Florence, Italy</publisher-loc> (<year>2019</year>). p. <fpage>1085</fpage>&#x2013;<lpage>97</lpage>. <pub-id pub-id-type="doi">10.18653/v1/p19-1103</pub-id> </citation>
</ref>
<ref id="B165">
<label>165.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>GY</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>Learning Multi-Level Dependencies for Robust Word Recognition</article-title>. <source>Proc. AAAI Conf. Artif. Intell.</source> (<year>2020</year>) <volume>34</volume>:<fpage>9250</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1609/aaai.v34i05.6463</pub-id> </citation>
</ref>
<ref id="B166">
<label>166.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Shi</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>K-W</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>C-J</given-names>
</name>
</person-group>. <article-title>Robustness Verification for Transformers</article-title>. In: <source>Findings of the Association for Computational Linguistics: EMNLP 2020</source>. <publisher-name>Association for Computational Linguistics</publisher-name> (<year>2020</year>), <fpage>164</fpage>&#x2013;<lpage>171</lpage>. </citation>
</ref>
<ref id="B167">
<label>167.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ye</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Gong</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Q</given-names>
</name>
</person-group>. <article-title>SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions</article-title>. In: <conf-name>Proceedings of the Annual Meeting of the Association for Computational Linguistics</conf-name>. <publisher-loc>Virtual, Online, United States</publisher-loc> (<year>2020</year>). p. <fpage>3465</fpage>&#x2013;<lpage>75</lpage>. </citation>
</ref>
<ref id="B168">
<label>168.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Mozes</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Stenetorp</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Kleinberg</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Griffin</surname>
<given-names>LD</given-names>
</name>
</person-group>. <article-title>Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples</article-title>. In: <conf-name>EACL 2021-16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference</conf-name>. <publisher-loc>Virtual</publisher-loc> (<year>2021</year>), <fpage>171</fpage>&#x2013;<lpage>86</lpage>. </citation>
</ref>
<ref id="B169">
<label>169.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zeng</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>X</given-names>
</name>
</person-group>. <source>Certified Robustness to Text Adversarial Attacks by Randomized [MASK]</source> (<year>2021</year>). <comment>arXiv preprint arXiv:2105.03743</comment>. </citation>
</ref>
<ref id="B170">
<label>170.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Ke</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>TextFirewall: Omni-Defending against Adversarial Texts in Sentiment Classification</article-title>. <source>IEEE Access</source> (<year>2021</year>) <volume>9</volume>:<fpage>27467</fpage>&#x2013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1109/access.2021.3058278</pub-id> </citation>
</ref>
<ref id="B171">
<label>171.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Karimi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Rossi</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Prati</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Adversarial Training for Aspect-Based Sentiment Analysis with BERT</article-title>. In: <conf-name>2020 25th International Conference on Pattern Recognition (ICPR)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2021</year>). p. <fpage>8797</fpage>&#x2013;<lpage>803</lpage>. <pub-id pub-id-type="doi">10.1109/icpr48806.2021.9412167</pub-id> </citation>
</ref>
<ref id="B172">
<label>172.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Shu</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Philip</surname>
<given-names>SY</given-names>
</name>
</person-group>. <article-title>BERT Post-training for Review Reading Comprehension and Aspect-Based Sentiment Analysis</article-title>. In: <conf-name>NAACL HLT 2019-2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference</conf-name>. <publisher-loc>Minneapolis, MN, United States</publisher-loc> (<year>2019</year>). p. <fpage>2324</fpage>&#x2013;<lpage>35</lpage>. </citation>
</ref>
<ref id="B173">
<label>173.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Miyato</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Dai</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Goodfellow</surname>
<given-names>I</given-names>
</name>
</person-group>. <article-title>Adversarial Training Methods for Semi-supervised Text Classification</article-title>. In: <conf-name>5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings</conf-name>. <publisher-loc>Toulon, France</publisher-loc> (<year>2017</year>). </citation>
</ref>
<ref id="B174">
<label>174.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Du</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Fang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Yi</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Tao</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>Enhancing the Robustness of&#x20;Neural Collaborative Filtering Systems under Malicious Attacks</article-title>. <source>IEEE&#x20;Trans Multimedia</source> (<year>2018</year>) <volume>21</volume>(<issue>3</issue>):<fpage>555</fpage>&#x2013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1109/tmm.2018.2887018</pub-id> </citation>
</ref>
<ref id="B175">
<label>175.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Papernot</surname>
<given-names>N</given-names>
</name>
<name>
<surname>McDaniel</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Jha</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Swami</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks</article-title>. In: <conf-name>2016 IEEE symposium on security and privacy (SP)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2016</year>). p. <fpage>582</fpage>&#x2013;<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1109/sp.2016.41</pub-id> </citation>
</ref>
<ref id="B176">
<label>176.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>X</given-names>
</name>
<name>
<surname>He</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Qi</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Chua</surname>
<given-names>T-S</given-names>
</name>
</person-group>. <article-title>Adversarial Training towards Robust Multimedia Recommender System</article-title>. <source>IEEE Trans Knowledge Data Eng</source> (<year>2019</year>) <volume>32</volume>(<issue>5</issue>):<fpage>855</fpage>&#x2013;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.1109/TKDE.2019.2893638</pub-id> </citation>
</ref>
<ref id="B177">
<label>177.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Manotumruksa</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yilmaz</surname>
<given-names>E</given-names>
</name>
</person-group>. <article-title>Sequential-based Adversarial Optimisation for Personalised Top-N Item Recommendation</article-title>. In: <conf-name>Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval</conf-name> (<year>2020</year>). p. <fpage>2045</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1145/3397271.3401264</pub-id> </citation>
</ref>
<ref id="B178">
<label>178.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Adversarial Learning to Compare: Self-Attentive Prospective Customer Recommendation in Location Based Social Networks</article-title>. In: <conf-name>Proceedings of the 13th International Conference on Web Search and Data Mining</conf-name> (<year>2020</year>). p. <fpage>349</fpage>&#x2013;<lpage>57</lpage>. </citation>
</ref>
<ref id="B179">
<label>179.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Han</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Adversarial Training-Based Mean Bayesian Personalized Ranking for Recommender System</article-title>. <source>IEEE Access</source> (<year>2019</year>) <volume>8</volume>:<fpage>7958</fpage>&#x2013;<lpage>68</lpage>. </citation>
</ref>
<ref id="B180">
<label>180.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Shahrasbi</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Mani</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Arrabothu</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Sharma</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Kannan</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>S</given-names>
</name>
</person-group>. <source>On Detecting Data Pollution Attacks on Recommender Systems Using Sequential GANs</source> (<year>2020</year>). <comment>arXiv preprint arXiv:2012.02509</comment>. </citation>
</ref>
<ref id="B181">
<label>181.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Lian</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Ge</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training</article-title>. In: <conf-name>Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval</conf-name> (<year>2021</year>). p. <fpage>1074</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1145/3404835.3462914</pub-id> </citation>
</ref>
<ref id="B182">
<label>182.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yi</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Dual Adversarial Variational Embedding for Robust Recommendation</article-title>. <source>IEEE Trans Knowledge Data Eng</source> (<year>2021</year>). <pub-id pub-id-type="doi">10.1109/tkde.2021.3093773</pub-id> </citation>
</ref>
<ref id="B183">
<label>183.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Shi</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>M</given-names>
</name>
</person-group>. <source>Robustness to Modification with Shared Words in Paraphrase Identification</source> (<year>2019</year>). <comment>arXiv preprint arXiv:1909.02560</comment>. </citation>
</ref>
<ref id="B184">
<label>184.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Koh</surname>
<given-names>PW</given-names>
</name>
<name>
<surname>Liang</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Understanding Black-Box Predictions via Influence Functions</article-title>. In: <conf-name>International Conference on Machine Learning</conf-name>. <publisher-loc>Sydney, NSW, Australia</publisher-loc>: <publisher-name>PMLR</publisher-name> (<year>2017</year>). p. <fpage>1885</fpage>&#x2013;<lpage>94</lpage>. </citation>
</ref>
<ref id="B185">
<label>185.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Xue</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>DPAEG: A Dependency Parse-Based Adversarial Examples Generation Method for Intelligent Q&#x26;A Robots</article-title>. <source>Security and Communication Networks</source>. <publisher-name>Hindawi</publisher-name> (<year>2020</year>). p. <fpage>2020</fpage>. </citation>
</ref>
<ref id="B186">
<label>186.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Deng</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Qin</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Meng</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Ding</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Qin</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>Attacking the Dialogue System at Smart home</article-title>. In: <conf-name>International Conference on Collaborative Computing: Networking, Applications and Worksharing</conf-name>. <publisher-name>Springer</publisher-name> (<year>2020</year>). p. <fpage>148</fpage>&#x2013;<lpage>58</lpage>. </citation>
</ref>
<ref id="B187">
<label>187.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ebrahimi</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Lowd</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Dou</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>On Adversarial Examples for Character-Level Neural Machine Translation</article-title>. In: <conf-name>Proceedings of the 27th International Conference on Computational Linguistics</conf-name>. <publisher-loc>Santa Fe, New Mexico, USA</publisher-loc>: <publisher-name>Association for Computational Linguistics</publisher-name> (<year>2018</year>). p. <fpage>653</fpage>&#x2013;<lpage>63</lpage>. </citation>
</ref>
<ref id="B188">
<label>188.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Guzm&#xe1;n</surname>
<given-names>F</given-names>
</name>
<name>
<surname>El-Kishky</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Rubinstein</surname>
<given-names>BIP</given-names>
</name>
<etal/>
</person-group>. <article-title>Putting Words into the System&#x2019;s Mouth: A Targeted Attack on Neural Machine Translation Using Monolingual Data Poisoning</article-title>. <source>Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021</source> (<year>2021</year>). p. <fpage>1463</fpage>&#x2013;<lpage>73</lpage>. </citation>
</ref>
<ref id="B189">
<label>189.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Macherey</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Robust Neural Machine Translation with Doubly Adversarial Inputs</article-title>. In: <conf-name>ACL 2019-57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference</conf-name>. <publisher-loc>Florence, Italy</publisher-loc> (<year>2019</year>). p. <fpage>4324</fpage>&#x2013;<lpage>33</lpage>. </citation>
</ref>
<ref id="B190">
<label>190.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Jiang</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Macherey</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Jacob</surname>
<given-names>E</given-names>
</name>
</person-group>. <article-title>AdvAug: Robust Adversarial Augmentation for Neural Machine Translation</article-title>. In: <conf-name>Proceedings of the Annual Meeting of the Association for Computational Linguistics</conf-name>. <publisher-loc>Virtual, United States</publisher-loc> (<year>2020</year>). p. <fpage>5961</fpage>&#x2013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.18653/v1/2020.acl-main.529</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>