<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Med.</journal-id>
<journal-title>Frontiers in Medicine</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Med.</abbrev-journal-title>
<issn pub-type="epub">2296-858X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fmed.2024.1379211</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Medicine</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>A novel approach toward cyberbullying with intelligent recommendations using deep learning based blockchain solution</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Alabdali</surname> <given-names>Aliaa M.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1343587/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/funding-acquisition/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Mashat</surname> <given-names>Arwa</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1773927/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/data-curation/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Faculty of Computing and Information Technology, King Abdulaziz University, Department of Information Technology</institution>, <addr-line>Rabigh</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff2"><sup>2</sup><institution>Faculty of Computing and Information Technology, King Abdulaziz University, Department of Information Systems</institution>, <addr-line>Rabigh</addr-line>, <country>Saudi Arabia</country></aff>
<author-notes>
<fn fn-type="edited-by" id="fn0001">
<p>Edited by: Sultan Ahmad, Prince Sattam Bin Abdulaziz University, Saudi Arabia</p>
</fn>
<fn fn-type="edited-by" id="fn0002">
<p>Reviewed by: Mahesh T. R, Jain University, India</p>
<p>Mohammad Tabrez Quasim, University of Bisha, Saudi Arabia</p>
<p>Poonam Chaudhary, The NorthCap University, India</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Arwa Mashat, <email>aasmashat@kau.edu.sa</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>02</day>
<month>04</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>11</volume>
<elocation-id>1379211</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>01</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>03</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Alabdali and Mashat.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Alabdali and Mashat</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Integrating healthcare into traffic accident prevention through predictive modeling holds immense potential. Decentralized defense presents a transformative vision for combating cyberbullying, prioritizing user privacy, fostering a safer online environment, and offering valuable insights for both healthcare and predictive modeling applications. As cyberbullying proliferates on social media, there is a pressing need for a robust and innovative solution that ensures user safety in cyberspace. This paper introduces an approach that merges Blockchain and Federated Learning (FL) to create a decentralized AI solution for cyberbullying. It also uses the Alloy language for formal modeling of social connections, with specific declarations defined by the novel algorithm in this paper, applied to two publicly available cyberbullying datasets. The proposed method uses a DBN to run established relation tests among the features in two phases: first, an LSTM runs tests to develop established features for the DBN layer; second, these tests are run on the various blocks of information in the blockchain. The performance of the proposed method is compared with previous research and evaluated using several metrics, creating standard benchmarks for real-world applications.</p>
</abstract>
<kwd-group>
<kwd>public health</kwd>
<kwd>prediction</kwd>
<kwd>health monitoring</kwd>
<kwd>blockchain</kwd>
<kwd>cyberbullying</kwd>
<kwd>federated learning</kwd>
<kwd>decision making</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="8"/>
<equation-count count="14"/>
<ref-count count="38"/>
<page-count count="14"/>
<word-count count="8138"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Precision Medicine</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Within the dynamic sphere of social media, the persistent issue of cyberbullying demands inventive and robust solutions to ensure user safety and cultivate a secure digital environment. Recent insights from the &#x201C;Cyberbullying Statistics, Facts, and Trends (2023) with Charts&#x201D; (<xref ref-type="bibr" rid="ref1">1</xref>) underscore concerning statistics, revealing that over 61% of teens on social media have encountered online bullying related to their appearance, while 41% of adults have personally confronted harassment on social media. A thorough examination of cyberbullying rates among adolescents further underscores the gravity of the issue, with a study in England revealing an incidence of 17.9%, and research in Saudi Arabia reporting a prevalence of 20.97% (<xref ref-type="bibr" rid="ref2">2</xref>). Despite recognized correlations between socio-economic factors, environmental influences, mental health, and cyberbullying tendencies, there remains an unexplored dimension&#x2014;the creation of an online self-sufficient system to address cyberbullying and offer necessary guidance to identified victims and bullies.</p>
<p>As our digital interconnectedness expands, so too does the urgency to confront the challenges posed by malicious online behaviors. This paper proposes a novel approach to combat cyberbullying by integrating findings from cyberbullying statistics with innovative solutions. Our approach involves the fusion of two cutting-edge technologies: Blockchain and Federated Learning (FL) (<xref ref-type="bibr" rid="ref3">3</xref>). Blockchain, known for its decentralized nature and transaction integrity, serves as the foundation of our solution, while Federated Learning facilitates collaborative machine learning without compromising individual data privacy. The Alloy language is utilized for the formal modeling of social connections, with specific declarations defined by our novel algorithm shaping the foundation of our proposed methodology. The incorporation of Long Short-Term Memory (LSTM) and Deep Belief Networks (DBN) into our system architecture enables established relational checks as well as feature detection within the DBN layer. Recognizing the importance of user accessibility, we augment our approach with an eXplainable Artificial Intelligence (XAI) layer, which sits atop our integration of Deep Learning and Blockchain technologies, making the solution more understandable to users in real-world circumstances. In the dynamic setting of online interactions, natural language processing (NLP) with AI capabilities emerges as an important aspect of the study of cyberbullying, playing a key role in deriving useful features from textual data. With the growing use of social media in day-to-day communication, applying NLP and AI to study and analyze human interactions, innate sentiments, and discourse patterns has become increasingly relevant. The availability of vast amounts of data and advances in NLP and AI capabilities are the main drivers of the surge in fields such as sentiment analysis and tone detection (<xref ref-type="bibr" rid="ref4">4</xref>). The same techniques are also used in information retrieval, topic modeling, and more. Cyberbullying has developed into a major issue in today&#x2019;s socially connected generation, referring to the purposeful and repetitive use of digital communication by miscreants to harass, intimidate, or hurt individuals. It includes a wide range of damaging activities such as spreading rumors, publishing sexual or slanderous content, sending abusive communications, and participating in online hate speech. Individuals&#x2019; mental health, social interactions, and overall well-being are all negatively impacted by cyberbullying (<xref ref-type="bibr" rid="ref5">5</xref>).</p>
<p>The design is kept such that the proposed solution can be deployed using existing packaging and MLOps processes. The work explored in this document aims to contribute to the existing studies on detection and prevention of cyberbullying by proposing a novel approach and make online spaces safer. It combines three powerful technologies: federated learning, blockchain, and deep learning with natural language processing (NLP). Federated learning protects user privacy by training the cyberbullying detection model on individual devices without sharing the data itself. Blockchain ensures the security and tamper-proof nature of the training process. Deep learning and NLP enable the model to accurately identify cyberbullying content.</p>
<p>Through this black-box model powered by federated learning and NLP techniques, we develop a system that works primarily on two factors: preserving social media user privacy and increasing the accuracy of cyberbullying detection. The work in this paper is in line with the objective of creating safer online spaces by detecting cyberbullying, thereby supporting the mental health of individuals in the digital era. Our study follows a well-defined federated training sequence of various blocks, developed to provide both user privacy and high-speed blockchain-based deep learning methods for cyberbullying detection.</p>
<p>In this paper, we have made the following contributions:<list list-type="bullet">
<list-item>
<p>To propose a novel framework using Blockchain and Federated Learning based Cybersecurity Solution (BFL-CS) to handle cyberbullying in social media space.</p>
</list-item>
<list-item>
<p>To develop novel algorithms which work as a hybrid Blockchain &#x0026; Federated Learning model for the prevention of cyberbullying.</p>
</list-item>
<list-item>
<p>To evaluate the proposed method with other deep learning-based methods, by using a dual layer deep learning architecture using LSTM and DBN techniques.</p>
</list-item>
<list-item>
<p>To assess the effectiveness of the work using metrics and visualization tools.</p>
</list-item>
</list></p>
<p>The paper is organized as follows: Section 1 presents the introduction and contributions, Section 2 reviews previous research in the field, Section 3 details the proposed framework and methodology, Section 4 presents the evaluation and discussion of the results, and the last section concludes with some future directions.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>Literature review</title>
<p>Muniyal et al. (<xref ref-type="bibr" rid="ref3">3</xref>) introduced Federated Learning (FL) as a procedure to secure sensitive user data across the processing pipeline. The authors emphasize the possibility of a security breach in a cyberbullying detection and prevention system when it is based on a central server. However, the performance of the proposed solution is demonstrated only on an IID (Independent and Identically Distributed) dataset. The solution, named &#x201C;FedBully,&#x201D; uses NLP techniques such as a sentence-embedding-based classifier and Sentence-BERT (Bidirectional Encoder Representations from Transformers) to detect cyberbullying, incorporating the training procedure from federated learning. Iwendi et al. (<xref ref-type="bibr" rid="ref6">6</xref>) propose a purely deep learning-based solution for the detection of cyberbullying in social media. Advanced techniques such as Bidirectional Long Short-Term Memory (BLSTM), Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and Recurrent Neural Networks (RNN) are used in an ensemble to achieve higher accuracy and AUC (Area Under the Curve). The solution also performs a significant amount of text cleaning and tokenization. The paper further presents a comparative analysis of various other deep learning methods, giving qualitative results for each with its accuracy and process performance. Samee et al. (<xref ref-type="bibr" rid="ref2">2</xref>) demonstrated the detection of cyberbullying with federated learning. The work improved the identification of cyberbullying cases by offering a richer understanding of the emotional context within communications, developing eight novel emotional features retrieved from textual tweets. 
The use of privacy-preserving federated learning enabled collaborative cyberbullying detection, maintaining data privacy while encouraging collaboration across varied groups for a more scalable and successful method. Furthermore, as in Iwendi et al. (<xref ref-type="bibr" rid="ref2">2</xref>), the analysis used a client selection strategy for the overall model ensemble that was based purely on the statistical performance of the model, with the aim of a more accurate output. The paper showed that the BERT model used in Gohal et al. (<xref ref-type="bibr" rid="ref2">2</xref>) outperforms traditional models such as CNN, DNN, and LSTM, even with a low number of epochs, i.e., 200.</p>
<sec id="sec3">
<label>2.1</label>
<title>Research gap</title>
<p>Based on the literature review, we observe that a recurring drawback in previous work on cyberbullying detection and mitigation is the centralization of sensitive user data collected from social media for deep learning model training, which raises a major privacy concern (<xref ref-type="bibr" rid="ref12">12</xref>). This disadvantage may also make the adoption of such systems problematic in real-world applications, as consumers will be hesitant to provide data to systems that take no precautions to safeguard it (<xref ref-type="bibr" rid="ref13">13</xref>). Furthermore, traditional approaches frequently struggle to perform effectively due to a lack of diverse user behavior data and linguistic patterns. In our research, we address the above-mentioned issues by combining federated learning with a secure blockchain-based backend and Alloy data modeling techniques. Federated learning uses a decentralized strategy to ensure that user data is handled and stored in a way that preserves user privacy. Furthermore, the basic operation of the primary deep learning methods provides the opportunity for continuous model tweaking, which, combined with other data security measures, helps us achieve our goal without compromising third-party data security (<xref ref-type="bibr" rid="ref7">7</xref>). Our paper uses the features of federated learning to handle these shortcomings of earlier methodologies, resulting in a ground-breaking approach to cyberbullying identification that maintains the highest level of user information privacy and data security.</p>
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Comparative study of systems proposed in earlier works</title>
<p>
<table-wrap position="anchor" id="tab1">
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">#</th>
<th align="left" valign="top">Paper title &#x0026; Ref No.</th>
<th align="left" valign="top">Advantages</th>
<th align="left" valign="top">Disadvantages</th>
<th align="left" valign="top">Techniques used</th>
<th align="left" valign="top">Dataset</th>
<th align="left" valign="top">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1</td>
<td align="left" valign="top">Shetty et al. (<xref ref-type="bibr" rid="ref3">3</xref>)</td>
<td align="left" valign="top">Masking of data before the start of preprocessing, leading to higher data security</td>
<td align="left" valign="top">High run time; two-fold increase in computational load on the system.</td>
<td align="left" valign="top">SBERT, Universal Sentence Encoders &#x2013;DAN, Universal Sentence Encoders &#x2013; Transformers</td>
<td align="left" valign="top">Data from Kaggle, YouTube, Twitter</td>
<td align="left" valign="top">97.12%</td>
</tr>
<tr>
<td align="left" valign="top">2</td>
<td align="left" valign="top">Fati et al. (<xref ref-type="bibr" rid="ref8">8</xref>)</td>
<td align="left" valign="top">Data pre-processing is made part of the deep learning methodology, leading to a more holistic output.</td>
<td align="left" valign="top">Running NLP and DL together leads to a dependency on one layer for processing the other layer. So, a failure in the NLP layer can make the entire architecture crumble.</td>
<td align="left" valign="top">Continuous Bag of Words based Conv1DLSTM</td>
<td align="left" valign="top">Data from Kaggle</td>
<td align="left" valign="top">97.34%</td>
</tr>
<tr>
<td align="left" valign="top">3</td>
<td align="left" valign="top">Bruwaene et al. (<xref ref-type="bibr" rid="ref9">9</xref>)</td>
<td align="left" valign="top">Ensemble deep learning method used to take advantage of various model accuracies.</td>
<td align="left" valign="top">High run time and the need for multiple steps of data preprocessing and encoding.</td>
<td align="left" valign="top">Multi-technique annotation and an ensemble of SVM, CNN &#x0026; XGBoost</td>
<td align="left" valign="top">VISR Dataset</td>
<td align="left" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top">4</td>
<td align="left" valign="top">Bozyigit et al. (<xref ref-type="bibr" rid="ref10">10</xref>)</td>
<td align="left" valign="top">Vanilla Artificial Neural Networks</td>
<td align="left" valign="top">Low accuracy of the model</td>
<td align="left" valign="top">Artificial Neural Networks</td>
<td align="left" valign="top">Twitter &#x2013; Hindi/Marathi</td>
<td align="left" valign="top">91%</td>
</tr>
<tr>
<td align="left" valign="top">5</td>
<td align="left" valign="top">Samee et al. (<xref ref-type="bibr" rid="ref11">11</xref>)</td>
<td align="left" valign="top">Federated Learning used with basic Machine Learning processes.</td>
<td align="left" valign="top">No emphasis is given on the security aspect of the deep learning layer.</td>
<td align="left" valign="top">FedBERT</td>
<td align="left" valign="top">Twitter</td>
<td align="left" valign="top">92.15%</td>
</tr>
</tbody>
</table>
</table-wrap>
</p>
</sec>
</sec>
<sec id="sec5">
<label>3</label>
<title>Proposed design methodology</title>
<p>This paper envisages a novel method named Blockchain and Federated Learning based Cybersecurity Solution (BFL-CS) to handle cyberbullying in the social media space and its prevention (<xref ref-type="bibr" rid="ref14">14</xref>). In the approach defined in this study, a federated learning methodology is employed with methods such as a modified LSTM in tandem with a traditional DBN to improve both the statistical parameters of the model and its privacy security. The LSTM has traditional parameters such as batch size, timesteps, and input feature vectors. It is to be noted that the DBN model is used as per its usual implementation, without any modifications.</p>
<p>The proposed methodology works on two layers of memory:<list list-type="order">
<list-item>
<p>A short-term memory (LSTM) that helps in generating blocks and federated learning nodes.</p>
</list-item>
<list-item>
<p>A long term memory (DBN) that keeps the learning from federated learning nodes and propagates it across the model during future epochs.</p>
</list-item>
</list></p>
<p>In this way, the model achieves a faster run time by actively forgetting information that does not add value to the model in the long run, while also generating highly accurate results from its long-memory model implementation.</p>
<p>In a classical ensemble implementation, the accuracies of two or more models are combined to get a unified result. In our model, however, two DL models work together on the same data, but at different stages, to generate a result.</p>
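<p>As a hedged illustration of this two-stage idea, the sketch below passes the same data through a short-term feature stage (standing in for the LSTM) and a long-term scoring stage (standing in for the DBN). All function names, weights, and data here are hypothetical placeholders, not the deployed model.</p>

```python
import numpy as np

# Illustrative two-stage pipeline in the spirit of the BFL-CS design.
# Stage 1 (short-term) reduces each encoded comment sequence to summary
# features; Stage 2 (long-term) scores those features. Real deployments
# would use a trained LSTM and DBN; the weights below are placeholders.

def short_term_features(sequences):
    """Stage 1: summarize each sequence as (mean, max, last value)."""
    return np.stack([[seq.mean(), seq.max(), seq[-1]] for seq in sequences])

def long_term_classify(features, weights, bias):
    """Stage 2: a fixed linear scorer; 1 = flagged as bullying."""
    scores = features @ weights + bias
    return (scores > 0).astype(int)

# Toy data: three "encoded comment" sequences.
seqs = [np.array([0.1, 0.2, 0.9]),
        np.array([0.0, 0.1, 0.1]),
        np.array([0.8, 0.9, 0.7])]
feats = short_term_features(seqs)
labels = long_term_classify(feats, weights=np.array([1.0, 1.0, 1.0]), bias=-1.5)
print(feats.shape, labels)  # (3, 3) [1 0 1]
```

<p>The key point of the sketch is that the two models touch the same data at different stages, rather than averaging independent predictions as a classical ensemble would.</p>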
<p>The architecture given below shows the complete data flow and working of the proposed design (<xref ref-type="fig" rid="fig1">Figure 1</xref>).</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Overall structure of the BFL-CS model.</p>
</caption>
<graphic xlink:href="fmed-11-1379211-g001.tif"/>
</fig>
<p>The framework model is listed and explained in the following steps.</p>
<sec id="sec6">
<label>3.1</label>
<title>Data warehousing</title>
<p>In our system architecture, the data is mainly collected from social media platforms using web scraping APIs. The scraper runs on a preset schedule to collect information at regular intervals, and new data is added to the existing information set (<xref ref-type="bibr" rid="ref15">15</xref>). In our model, data is stored in PostgreSQL. Currently the solution is hosted locally; however, as the complexity and size of the data increase, we plan to scale the solution to AWS S3 across 3 Availability Zones (AZs).</p>
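<p>The scheduled-scrape-and-append step can be sketched as follows. This is a self-contained illustration only: the deployed system uses PostgreSQL, but sqlite3 stands in here, and the function, table, and column names are hypothetical.</p>

```python
import sqlite3
import time

# Hypothetical sketch of the warehousing step: on each scheduler tick,
# newly scraped comments are appended to the existing store.

def fetch_new_comments():
    # Placeholder for the web-scraping API call made on each tick.
    return [("user_42", "example comment text", time.time())]

def store_batch(conn, rows):
    conn.executemany(
        "INSERT INTO comments (author, body, scraped_at) VALUES (?, ?, ?)",
        rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (author TEXT, body TEXT, scraped_at REAL)")
for _ in range(3):  # three scheduler ticks
    store_batch(conn, fetch_new_comments())
count = conn.execute("SELECT COUNT(*) FROM comments").fetchone()[0]
print(count)  # 3
```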
</sec>
<sec id="sec7">
<label>3.2</label>
<title>Data pre-processing</title>
<p>At this stage, the data is made ready for ingestion into the model to obtain the desired performance. This step removes unnecessary effects from the signal, prevents issues, and improves accuracy. Here the dataset (the &#x201C;BFL-CS dataset&#x201D;) is assembled, and operations such as data cleaning, normalization, and development of the data stream are performed.</p>
</sec>
<sec id="sec8">
<label>3.3</label>
<title>Data cleaning and normalization</title>
<p>All blank-value fields, as well as social media comments for which clean word stems cannot be established, are deleted from the database to prevent any influence on the model from high-level outliers. Normalization is also performed in this process to eliminate the influence of features on dissimilar scales, which reduces the model&#x2019;s run time.</p>
<p>In many cases, data scientists use the min-max normalization process. However, this method has its own problems: since it is primarily a feature scaling method, it significantly lowers the bias of the model. While bias is often seen as a vice, in our case the bias of the model actually points us toward the habitual bullies (<xref ref-type="bibr" rid="ref16">16</xref>). Therefore, in our model we apply a lesser-known normalization process that relates the dataset to its standard deviation.</p>
<p>We use the <inline-formula>
<mml:math id="M1">
<mml:mi>Z</mml:mi>
</mml:math>
</inline-formula>-Score normalization procedure to normalize the data and scaling it as per the requirement of the proposed model shown in <xref ref-type="disp-formula" rid="EQ1">Equation 1</xref>,<disp-formula id="EQ1">
<label>(1)</label>
<mml:math id="M2">
<mml:mrow>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mo>#</mml:mo>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x00AF;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>&#x03C3;</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>Here, <inline-formula>
<mml:math id="M3">
<mml:mrow>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mo>#</mml:mo>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> is the Z-normalized value. <inline-formula>
<mml:math id="M4">
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x00AF;</mml:mo>
</mml:mover>
</mml:math>
</inline-formula> is the average value/mean and <inline-formula>
<mml:math id="M5">
<mml:mi>&#x03C3;</mml:mi>
</mml:math>
</inline-formula> is the standard deviation of the data. This normalization is applied to all numeric data directly; non-numeric data first undergoes one-hot or ordinal encoding before the normalization process.</p>
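<p>A minimal sketch of this step is given below: the Z-score of Equation 1 applied to a numeric column, and one-hot encoding applied to a categorical column beforehand. The column values and category labels are illustrative only.</p>

```python
import numpy as np

# Z-score normalization, x# = (x - mean) / sigma, as in Equation 1,
# plus one-hot encoding for non-numeric data.

def z_score(column):
    return (column - column.mean()) / column.std()

def one_hot(labels):
    categories = sorted(set(labels))
    return np.array([[1 if l == c else 0 for c in categories] for l in labels])

ages = np.array([20.0, 30.0, 40.0])     # illustrative numeric feature
normalized = z_score(ages)               # mean 0, unit variance
encoded = one_hot(["bully", "victim", "bully"])  # illustrative labels
print(normalized, encoded)
```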
</sec>
<sec id="sec9">
<label>3.4</label>
<title>Data stream for real time data publication to base database</title>
<p>This step involves a sophisticated integration of advanced data streaming and storage methodologies, as this step is very crucial in sensing repeated offenders and sensing their patterns. The various concepts incorporated in the model are as follows:</p>
<p>Event-driven architecture enables real-time processing by triggering and responding to events as they occur via webhooks, making it instrumental in capturing and handling data streams in real time. Kafka facilitates the building of real-time data pipelines and streaming applications, supporting the collection and import of real-time data streams into the base database for immediate storage and analysis. Messaging protocols such as AMQP and XMPP minimize the time it takes for data to travel from source to destination, ensuring low-latency data delivery.</p>
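<p>The event-driven pattern above can be sketched with a small in-memory bus: handlers subscribe to topics and fire as events arrive, the same shape a Kafka consumer or webhook endpoint would take. The bus, topic names, and payload below are hypothetical stand-ins for the real broker.</p>

```python
from collections import defaultdict

# Minimal in-memory event bus illustrating event-driven ingestion:
# publishing an event immediately triggers every subscribed handler.

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)

received = []
bus = EventBus()
bus.subscribe("new_comment", lambda e: received.append(e))
bus.publish("new_comment", {"author": "user_42", "body": "hello"})
print(len(received))  # 1
```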
</sec>
<sec id="sec10">
<label>3.5</label>
<title>API Integration</title>
<p>Representational State Transfer (REST) APIs follow a set of architectural principles for designing networked applications, providing a standardized way for systems to communicate (<xref ref-type="bibr" rid="ref17">17</xref>). Webhooks enable real-time communication between systems by triggering events in one system based on actions or updates in another, enhancing the responsiveness of API integrations. OAuth is a protocol for secure API authorization, allowing applications to access resources on behalf of a user with limited permissions. An API gateway provides a centralized entry point that manages and optimizes API requests, ensuring scalability, security, and efficient data flow between systems. The entire design is parametric in nature, without any hardcoded values; these parameters are controlled by API-driven microservices.</p>
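<p>One common way to secure the webhook calls mentioned above is HMAC payload signing; the paper does not specify this mechanism, so the sketch below is an illustration of the general technique, with a hypothetical secret and payload.</p>

```python
import hashlib
import hmac

# The sender signs each webhook payload with a shared secret; the
# receiver recomputes the HMAC before trusting the request.

SECRET = b"shared-secret"  # illustrative value

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign(payload), signature)

payload = b'{"event": "flagged_comment", "id": 7}'
sig = sign(payload)
print(verify(payload, sig), verify(b"tampered", sig))  # True False
```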
</sec>
<sec id="sec11">
<label>3.6</label>
<title>Data broadcaster to blockchain</title>
<p>At this stage, a data broadcaster is developed which pushes the information to the blockchain, marrying the real-time dissemination of information with the immutable, decentralized characteristics of blockchain technology.</p>
<p>The key components and technical processes involved at this stage are as follows. The deployment of a specialized protocol, such as DBP (hybrid ICMP &#x0026; POP3), facilitates the secure and efficient real-time broadcasting of diverse data types onto a blockchain network. Decentralized Ledger Technology ensures a decentralized and distributed ledger, eliminating single points of failure and fortifying data availability across a network of nodes. The integration of a sophisticated execution engine ensures the seamless automation and enforcement of predefined rules embedded within smart contracts associated with the broadcasted data. The utilization of cryptographic hash functions, in our case SHA-512 (specialized for our application), safeguards the immutability of data on the blockchain, rendering each block impervious to unauthorized modifications. A consensus algorithm, such as Proof of Work (PoW) or Proof of Stake (PoS), orchestrates the agreement among network nodes, validating transactions and solidifying the security of the data broadcasting process. Blockchain&#x2019;s inherent transparency provides an audit trail that allows participants to scrutinize the origin, journey, and modifications (if any) made to the broadcasted data, fostering accountability and trust. The comprehensive security architecture ensures the resilience of the data during transmission and storage, encompassing encryption, public-key infrastructure (PKI), and other robust security measures.</p>
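<p>The SHA-512 hashing and Proof-of-Work elements described above can be sketched as follows. The difficulty, field names, and payload are illustrative assumptions, not the deployed parameters.</p>

```python
import hashlib
import json

# Each block's SHA-512 hash covers its payload, the previous block's
# hash, and a nonce; a simple PoW loop searches for a nonce whose
# digest has the required number of leading zeros.

def block_hash(block):
    serialized = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha512(serialized).hexdigest()

def mine(payload, prev_hash, difficulty=2):
    nonce = 0
    while True:
        block = {"payload": payload, "prev": prev_hash, "nonce": nonce}
        digest = block_hash(block)
        if digest.startswith("0" * difficulty):
            return block, digest
        nonce += 1

block, digest = mine({"comment_id": 7, "flag": "bullying"},
                     prev_hash="0" * 128)
print(digest[:8])
```

<p>Because the digest depends on every field, any later modification to the payload invalidates the stored hash, which is what renders the block tamper-evident.</p>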
</sec>
<sec id="sec12">
<label>3.7</label>
<title>Blockchain administration system</title>
<p>This system ensures that individual changes are meticulously recorded within blocks, contributing to a transparent and tamper-resistant ledger with a time- and pseudo-random-number-based identification module. The system allows individual data entries to be added to the blockchain, with each piece of information forming a block in the distributed ledger. This decentralization eliminates the need for a central authority, enhancing transparency and reducing the risk of single points of failure (<xref ref-type="bibr" rid="ref18">18</xref>). The heart of blockchain&#x2019;s power lies in its unchangeability: information in a block, once added, is cryptographically secured, making it virtually impossible to modify or erase. This feature guarantees the integrity of the recorded data throughout its entire existence. Every block in the blockchain is timestamped, providing an accurate record of when each data addition occurred. This temporal dimension adds another layer of transparency and traceability to the administration system. Smart contracts, self-executing contracts with predefined rules, can be incorporated to automate specific administrative functions, enhancing efficiency and reducing the need for manual intervention in routine processes. The administration of the blockchain is distributed across network nodes, eliminating the need for a centralized administrator. This decentralized governance model aligns with the principles of autonomy and inclusivity.</p>
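<p>The tamper-resistance property of the administration system can be sketched as a chain validation check: each block stores a timestamp and the previous block's SHA-512 hash, so any later edit breaks re-validation. The field names and payloads below are illustrative.</p>

```python
import hashlib
import json
import time

# Append-only chain of timestamped administrative records; valid()
# re-derives every hash and link to detect tampering.

def digest_of(block):
    return hashlib.sha512(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 128
    block = {"payload": payload, "prev": prev, "ts": time.time()}
    block["hash"] = digest_of({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def valid(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if digest_of(body) != block["hash"]:
            return False                       # block contents were altered
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False                       # chain link was broken
    return True

chain = []
append(chain, {"change": "user flagged"})
append(chain, {"change": "content removed"})
ok_before = valid(chain)
chain[0]["payload"]["change"] = "edited"       # attempted tampering
print(ok_before, valid(chain))  # True False
```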
</sec>
<sec id="sec13">
<label>3.8</label>
<title>Deep learning engine</title>
<p>The deep learning engine in our architecture combines two methods: we first run a classification using LSTM, and then a second classification using Deep Belief Networks, which produces the final result.</p>
<p>Long Short-Term Memory (LSTM) is a modified recurrent neural network (RNN) architecture designed to address the vanishing gradient problem of standard RNNs, enabling more effective modeling of sequential data. The key innovation of LSTMs lies in their memory cells, which allow them to capture and store information over long sequences.</p>
<p>Mathematically, the LSTM model is characterized as follows:</p>
<p>The base model consists of three units&#x2014;the input unit <inline-formula>
<mml:math id="M6">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, forget unit <inline-formula>
<mml:math id="M7">
<mml:mi>f</mml:mi>
</mml:math>
</inline-formula>, and output unit <inline-formula>
<mml:math id="M8">
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
<p>In addition, the data state is stored in the cell state <inline-formula>
<mml:math id="M9">
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
<p>The input unit controls the flow of new information into the cell, the forget unit controls the retention of existing information, and the output unit determines the knowledge to be output from the cell.</p>
<p>The computations within an LSTM cell are governed by the following <xref ref-type="disp-formula" rid="EQ2">Equations 2</xref>&#x2013;<xref ref-type="disp-formula" rid="EQ11">11</xref>:<disp-formula id="EQ2">
<label>(2)</label>
<mml:math id="M10">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>p</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula><disp-formula id="EQ3">
<label>(3)</label>
<mml:math id="M11">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula><disp-formula id="EQ4">
<label>(4)</label>
<mml:math id="M12">
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>p</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mi>o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mi>o</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula><disp-formula id="EQ5">
<label>(5)</label>
<mml:math id="M13">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>W</mml:mi>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula><disp-formula id="EQ6">
<label>(6)</label>
<mml:math id="M14">
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>s</mml:mi>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mspace width="thickmathspace"/>
<mml:mi>X</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>i</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mspace width="thickmathspace"/>
<mml:mi>X</mml:mi>
<mml:mspace width="thickmathspace"/>
<mml:msub>
<mml:mi>g</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula><disp-formula id="EQ7">
<label>(7)</label>
<mml:math id="M15">
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2299;</mml:mo>
<mml:mi>tan</mml:mi>
<mml:mi>h</mml:mi>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>Here, <inline-formula>
<mml:math id="M16">
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the input at time <inline-formula>
<mml:math id="M17">
<mml:mi>t</mml:mi>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M18">
<mml:mi>h</mml:mi>
</mml:math>
</inline-formula> is the hidden state at time <inline-formula>
<mml:math id="M19">
<mml:mi>t</mml:mi>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M20">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>U</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> denotes the activation function (written as RELU in the equations above, where the standard LSTM formulation uses the sigmoid function), and <inline-formula>
<mml:math id="M21">
<mml:mi>X</mml:mi>
</mml:math>
</inline-formula> represents element-wise multiplication. The weight matrix <inline-formula>
<mml:math id="M22">
<mml:mrow>
<mml:mi>W</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and bias vector <inline-formula>
<mml:math id="M23">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula> are parameters learned during the training process. The LSTM&#x2019;s ability to selectively retain and utilize information over varying time intervals makes it well-suited for tasks involving sequential and time-series data.</p>
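The gate computations of Equations 2&#x2013;7 can be sketched in NumPy as follows. This is a minimal illustration, not the deployed engine: the standard sigmoid gating is used, and the stacked-gate weight layout and dimensions are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step following Equations 2-7. W, U, b hold the
    input/forget/candidate/output gate parameters stacked row-wise."""
    n = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b       # all four gate pre-activations at once
    ip = sigmoid(z[0:n])               # input unit   (Eq. 2)
    f  = sigmoid(z[n:2*n])             # forget unit  (Eq. 3)
    g  = np.tanh(z[2*n:3*n])           # candidate    (Eq. 5)
    op = sigmoid(z[3*n:4*n])           # output unit  (Eq. 4)
    cs = f * c_prev + ip * g           # cell state   (Eq. 6)
    h  = op * np.tanh(cs)              # hidden state (Eq. 7)
    return h, cs

rng = np.random.default_rng(0)
d, n = 4, 3  # illustrative input and hidden sizes
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x_t in rng.normal(size=(5, d)):    # run a short toy sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
```

The forget gate `f` scales the previous cell state while the input gate `ip` admits new candidate content, which is what lets the cell retain information over long sequences.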
<p>In addition to the LSTM, the proposed model uses Deep Belief Networks (DBNs) in tandem.</p>
<p>Deep Belief Networks (DBNs) are a type of generative neural network architecture composed of multiple layers of stochastic, latent variables. DBNs consist of two main components: a stack of Restricted Boltzmann Machines (RBMs) and a top layer that serves as a discriminative model. The hidden layer of each RBM serves as the visible layer for the next, creating a hierarchical structure. The mathematical formulation of DBNs involves the activation probabilities of the hidden and visible layers, weight matrices, and biases. Let <inline-formula>
<mml:math id="M24">
<mml:mi>h</mml:mi>
</mml:math>
</inline-formula> represent the hidden layer and <inline-formula>
<mml:math id="M25">
<mml:mi>v</mml:mi>
</mml:math>
</inline-formula> the visible layer. The activation probabilities <inline-formula>
<mml:math id="M26">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> for hidden unit <inline-formula>
<mml:math id="M27">
<mml:mi>j</mml:mi>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M28">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> for visible <inline-formula>
<mml:math id="M29">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> unit are given by:<disp-formula id="EQ8">
<label>(8)</label>
<mml:math id="M30">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula><disp-formula id="EQ9">
<label>(9)</label>
<mml:math id="M31">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>The algorithm inside our deep learning engine is thus a mixture of two base models. The integration of Long Short-Term Memory (LSTM) and Deep Belief Networks (DBN) in a unified system leverages the strengths of both models, improving the accuracy of modeling and generation of sequential data. In this hybrid system, the LSTM component captures long-term dependencies and patterns in sequential information, while the DBN component contributes hierarchical feature learning and generation (<xref ref-type="bibr" rid="ref19">19</xref>). Mathematically, the output of the LSTM (<inline-formula>
<mml:math id="M32">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) and DBN (<inline-formula>
<mml:math id="M33">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) components can be combined to produce the final system output (<inline-formula>
<mml:math id="M34">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) as follows (<xref ref-type="bibr" rid="ref20">20</xref>):<disp-formula id="EQ10">
<label>(10)</label>
<mml:math id="M35">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
<mml:mo>.</mml:mo>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>.</mml:mo>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula></p>
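Equations 8, 9 can be sketched directly in NumPy (the layer sizes, weight scale, and sampling threshold below are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_hidden_given_visible(v, W, b):
    """P(h_j = 1) = sigma(b_j + sum_i W_ij v_i)  (Eq. 8)."""
    return sigmoid(b + v @ W)

def p_visible_given_hidden(h, W, c):
    """P(v_i = 1) = sigma(c_i + sum_j W_ij h_j)  (Eq. 9)."""
    return sigmoid(c + h @ W.T)

rng = np.random.default_rng(1)
n_vis, n_hid = 6, 4                       # illustrative RBM layer sizes
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
b, c = np.zeros(n_hid), np.zeros(n_vis)

v = rng.integers(0, 2, size=n_vis).astype(float)  # a binary visible vector
ph = p_hidden_given_visible(v, W, b)
pv = p_visible_given_hidden((ph > 0.5).astype(float), W, c)
```

In a stacked DBN, the hidden activations `ph` of one RBM become the visible input of the next, giving the hierarchical structure described above.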
<p>Here, <inline-formula>
<mml:math id="M36">
<mml:mi>&#x03B1;</mml:mi>
</mml:math>
</inline-formula> is a weighting parameter that determines the influence of each component on the final output. This hybrid approach aims to exploit the complementary strengths of LSTM and DBN, providing a more robust and expressive model for tasks such as sequence generation, where capturing both short-term and long-term dependencies is crucial. The choice of <inline-formula>
<mml:math id="M37">
<mml:mi>&#x03B1;</mml:mi>
</mml:math>
</inline-formula> allows for flexible adjustment of the contribution of each component, enabling fine-tuning based on specific task requirements and data characteristics.</p>
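The blending of Equation 10 amounts to a convex combination of the two branch outputs; a minimal sketch (the branch outputs and the &#x3B1; value are illustrative):

```python
import numpy as np

def blend_outputs(o_lstm, o_dbn, alpha=0.7):
    """O_FINAL = alpha * O_LSTM + (1 - alpha) * O_DBN  (Eq. 10).
    alpha in [0, 1] weights the LSTM branch; 0.7 is an illustrative choice."""
    return alpha * np.asarray(o_lstm) + (1.0 - alpha) * np.asarray(o_dbn)

# With alpha = 0.5 the two branches contribute equally.
o = blend_outputs([0.9, 0.1], [0.6, 0.4], alpha=0.5)  # -> [0.75, 0.25]
```

Because the combination is linear in &#x3B1;, sweeping &#x3B1; on a validation set is a cheap way to tune the relative contribution of each branch.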
</sec>
<sec id="sec14">
<label>3.9</label>
<title>Mathematical model</title>
<p>Consider the continuous-time outputs <inline-formula>
<mml:math id="M38">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M39">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> from the LSTM and DBN components, respectively. The continuous-time final output <inline-formula>
<mml:math id="M40">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is expressed as an integral over time, with a parameterized blending factor <inline-formula>
<mml:math id="M41">
<mml:mrow>
<mml:mi>&#x03B1;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> denoting the time-varying contribution of each component shown in <xref ref-type="disp-formula" rid="EQ11">Equations 11</xref>&#x2013;<xref ref-type="disp-formula" rid="EQ14">14</xref>:<disp-formula id="EQ11">
<label>(11)</label>
<mml:math id="M42">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mstyle displaystyle="true">
<mml:mrow>
<mml:msubsup>
<mml:mo>&#x222B;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mi>t</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mi>&#x03B1;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x03C4;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>.</mml:mo>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x03C4;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x03C4;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>.</mml:mo>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x03C4;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>&#x03C4;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</disp-formula>This integral formulation captures the continuous evolution of the system&#x2019;s output over time, reflecting the dynamic nature of the blending process.</p>
<p>The objective function for continuous-time training is defined as the integral of the squared error between the system&#x2019;s output <inline-formula>
<mml:math id="M43">
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and the target output <inline-formula>
<mml:math id="M44">
<mml:mrow>
<mml:mi>Y</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>:<disp-formula id="EQ12">
<label>(12)</label>
<mml:math id="M45">
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:msub>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x222B;</mml:mo>
</mml:mstyle>
<mml:mn>0</mml:mn>
<mml:mi>T</mml:mi>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>O</mml:mi>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>I</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>L</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>Y</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>The gradients with respect to the parameters (<inline-formula>
<mml:math id="M46">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M47">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</inline-formula>) guide the continuous-time parameter updates during the training process.</p>
<p>The continuous-time optimization involves adjusting the parameters through an integral-based gradient descent approach:<disp-formula id="EQ13">
<label>(13)</label>
<mml:math id="M48">
<mml:mrow>
<mml:msubsup>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B7;</mml:mi>
<mml:mstyle displaystyle="true">
<mml:mrow>
<mml:msubsup>
<mml:mo>&#x222B;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>T</mml:mi>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mi>d</mml:mi>
<mml:mi>&#x03C4;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</disp-formula><disp-formula id="EQ14">
<label>(14)</label>
<mml:math id="M49">
<mml:mrow>
<mml:msubsup>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B7;</mml:mi>
<mml:mstyle displaystyle="true">
<mml:mrow>
<mml:msubsup>
<mml:mo>&#x222B;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mo>&#x2205;</mml:mo>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mi>B</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mi>d</mml:mi>
<mml:mi>&#x03C4;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</disp-formula>Here, <inline-formula>
<mml:math id="M51">
<mml:mi>&#x03B7;</mml:mi>
</mml:math>
</inline-formula> represents the learning rate, and <inline-formula>
<mml:math id="M52">
<mml:mrow>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> signifies the time step in the continuous-time parameter space during each iteration.</p>
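Discretizing the integral updates of Equations 13, 14 with a left Riemann sum reduces them to a gradient step accumulated over the interval; a toy sketch on a quadratic objective (the objective, learning rate, and step sizes are illustrative assumptions):

```python
import numpy as np

def integral_gradient_step(theta, grad_fn, t, dt, eta, n_sub=10):
    """Approximate theta_{t+dt} = theta_t - eta * int_t^{t+dt} dJ/dtheta dtau
    (Eqs. 13-14) with a left Riemann sum over n_sub sub-steps."""
    acc = np.zeros_like(theta)
    h = dt / n_sub
    tau = t
    for _ in range(n_sub):
        acc += grad_fn(theta, tau) * h   # accumulate the gradient integral
        tau += h
    return theta - eta * acc

# Toy objective J(theta) = 0.5 * theta^2, so dJ/dtheta = theta.
grad = lambda th, tau: th
theta = np.array([2.0])
for step in range(50):
    theta = integral_gradient_step(theta, grad, t=step * 0.1, dt=0.1, eta=0.5)
# theta decays geometrically toward the minimizer at 0
```

With `n_sub = 1` this collapses to ordinary gradient descent with an effective step `eta * dt`, so the continuous-time formulation generalizes the discrete update rather than replacing it.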
<p>From the above mathematical model, we define a base algorithm on which the entire architecture is built; the algorithm is as follows:</p>
<sec id="sec15">
<label>ALGORITHM 1</label>
<title>: Deep learning engine of BFL-CS.</title>
<p>
<table-wrap position="anchor" id="tab2">
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="left" valign="top">
<inline-graphic xlink:href="fmed-11-1379211-i001.tif"/>
</td>
</tr>
</tbody>
</table>
</table-wrap>
</p>
</sec>
</sec>
<sec id="sec16">
<label>3.10</label>
<title>Federated learning node</title>
<p>In the context of the proposed model for combating cyberbullying through a decentralized defense system, Federated Learning (FL) serves as a main technological backbone of the solution. By distributing the model training process across individual devices, FL ensures that sensitive user data, integral to understanding and mitigating cyberbullying, remains localized. Federated learning also handles separate learning activities across the data, which makes the system faster by running complex algorithms on small-scale datasets with limited features.</p>
<p>This decentralized approach mitigates privacy concerns associated with centralization, a critical consideration in the realm of cyberbullying detection. Moreover, FL&#x2019;s iterative model refinement, conducted collaboratively while preserving individual data, holds significant promise in enhancing the system&#x2019;s understanding of evolving cyberbullying patterns. The incorporation of FL in the proposed system aligns with the broader goal of empowering users and institutions to actively contribute to the development of robust cyberbullying detection models, fostering a collective defense against online harassment while respecting individual privacy. <xref ref-type="sec" rid="sec15">Algorithms 1</xref>, <xref ref-type="sec" rid="sec17">2</xref> for the complete model are given below:</p>
<sec id="sec17">
<label>ALGORITHM 2</label>
<title>: Complete BFL-CS Model.</title>
<p>
<table-wrap position="anchor" id="tab3">
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="left" valign="top">
<inline-graphic xlink:href="fmed-11-1379211-i002.tif"/>
</td>
</tr>
</tbody>
</table>
</table-wrap>
</p>
<p>This code implements a secure federated learning system for training a combined LSTM and DBN model. In each round, clients are chosen to participate. They receive a global model, train their local versions on their own data, and calculate updates. To protect privacy, these updates can be masked with noise or securely combined before being sent back to a central server. The server aggregates the updates and improves the global model. Finally, for tamper-proof tracking, each improved model is recorded on a blockchain ledger. This process repeats for multiple rounds, resulting in a collaboratively trained model without ever sharing the raw data from individual clients.</p>
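A minimal sketch of one such round, with logistic regression standing in for the LSTM+DBN local model and Gaussian noise masking standing in for the secure-aggregation step (these stand-ins, and all hyperparameters, are assumptions for illustration):

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """Client-side training: a few gradient steps of logistic regression
    as a stand-in for the LSTM+DBN local model."""
    w = global_w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def federated_round(global_w, clients, noise_scale=0.01, rng=None):
    """One round: each client trains locally, masks its update with
    Gaussian noise (a simple privacy stand-in), and the server
    averages the masked updates (FedAvg-style aggregation)."""
    if rng is None:
        rng = np.random.default_rng(0)
    updates = []
    for X, y in clients:
        delta = local_update(global_w, X, y) - global_w
        updates.append(delta + rng.normal(scale=noise_scale, size=delta.shape))
    return global_w + np.mean(updates, axis=0)

rng = np.random.default_rng(42)
clients = [(rng.normal(size=(20, 3)), rng.integers(0, 2, 20).astype(float))
           for _ in range(4)]  # four toy clients with private data
w = np.zeros(3)
for _ in range(10):            # ten communication rounds
    w = federated_round(w, clients, rng=rng)
```

Only the masked parameter deltas leave each client; the raw `(X, y)` data never does, which is the privacy property the text describes. Recording each round's aggregated `w` on the ledger would add the tamper-proof tracking step.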
</sec>
</sec>
<sec id="sec18">
<label>3.11</label>
<title>Result node, feedback loop: to deep learning engine and corrective data loop: to social media</title>
<p>In the proposed system, the culmination of federated learning, LSTM, DBN, data collection, preprocessing, and blockchain management converges at the result node (<xref ref-type="bibr" rid="ref21">21</xref>).</p>
<p>This node serves as the repository for the outcomes of the intricate processes conducted during each communication round. Subsequently, these results are broadcasted into the system feedback loop, initiating a sequence of actions for system parameter optimization. The system feedback loop strategically utilizes the obtained results to refine global model parameters, enhancing the overall effectiveness of the cyberbullying detection system. Simultaneously, the results are channeled into the social media loop, triggering actions against systemic bullies. This dynamic loop interfaces with social media platforms to deploy measures aimed at curtailing cyberbullying activities. The feedback-driven optimization process and decisive actions against online aggressors collectively contribute to the robustness and adaptability of the decentralized defense system, fostering a safer and more secure online environment.</p>
</sec>
<sec id="sec19">
<label>3.12</label>
<title>Alloy modeling</title>
<p>In this paper, Alloy language helps in formalizing and modeling the intricate social connections within the context of cyberbullying detection (<xref ref-type="bibr" rid="ref22">22</xref>). Alloy, a declarative modeling language, provides a robust framework for expressing and analyzing complex relationships between entities in a system. Specifically, we employ Alloy language to create formalized declarations and constraints that define the features and dynamics of social interactions within the cyber realm (<xref ref-type="bibr" rid="ref23">23</xref>, <xref ref-type="bibr" rid="ref24">24</xref>). We construct a formal model that captures the essential features and constraints relevant to cyberbullying scenarios. This model helps in shaping the foundation of our proposed methodology, influencing the design of our novel algorithm. Alloy&#x2019;s ability to articulate intricate relationships and constraints enhances the precision of our modeling efforts, contributing to the overall effectiveness of the decentralized defense system against cyberbullying.</p>
</sec>
</sec>
<sec id="sec20">
<label>4</label>
<title>Experimental results and discussions</title>
<p>The BFL-CS method for the detection and prevention of cyberbullying in social media is evaluated using the federated deep learning processes described above.</p>
<p>The method is evaluated on measures such as recall, F1, and accuracy, and the results are compared with existing methods such as Vanilla RNN (v-RNN), Deep Reinforcement Learning (DRL), Residual Networks (ResNet), and Capsule Networks (CapNets). It should be noted that, although the design targets English-language analysis, with appropriate training data the approach yields comparable results on various regional languages, as shown by Pawar et al. (<xref ref-type="bibr" rid="ref25">25</xref>) and Haider et al. (<xref ref-type="bibr" rid="ref26">26</xref>).</p>
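The reported measures follow the usual confusion-matrix definitions; a minimal sketch (the counts are illustrative):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)          # of flagged posts, how many were bullying
    recall = tp / (tp + fn)             # of bullying posts, how many were flagged
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = classification_metrics(tp=90, fp=10, fn=10, tn=90)
# p = r = f1 = acc = 0.9 for these symmetric counts
```

For cyberbullying detection, recall is often weighted heavily, since a missed bullying post (a false negative) is typically costlier than a false alarm.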
<sec id="sec21">
<label>4.1</label>
<title>Experimental setup</title>
<p>In this paper, the proposed methodology is implemented using Python and R. Pre-built packages are used for the implementation (<xref ref-type="bibr" rid="ref27">27</xref>).</p>
<p>Details of the experimental setup along with the details of packages used are as follows:<table-wrap position="anchor" id="tab4">
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">#</th>
<th align="left" valign="top">Particular</th>
<th align="left" valign="top">Specification/details</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1</td>
<td align="left" valign="top">Processor</td>
<td align="left" valign="top">Intel Core i7-14700K</td>
</tr>
<tr>
<td align="left" valign="top">2</td>
<td align="left" valign="top">RAM</td>
<td align="left" valign="top">8 GB</td>
</tr>
<tr>
<td align="left" valign="top">3</td>
<td align="left" valign="top">Operating Clock Frequency</td>
<td align="left" valign="top">3.6 GHz</td>
</tr>
<tr>
<td align="left" valign="top">4</td>
<td align="left" valign="top">IDE (Python)</td>
<td align="left" valign="top">PyCharm</td>
</tr>
<tr>
<td align="left" valign="top">5</td>
<td align="left" valign="top">IDE (R)</td>
<td align="left" valign="top">R Studio</td>
</tr>
<tr>
<td align="left" valign="top">6</td>
<td align="left" valign="top">Packages (Python)</td>
<td align="left" valign="top">TensorFlow, Caffe</td>
</tr>
<tr>
<td align="left" valign="top">7</td>
<td align="left" valign="top">Packages (R)</td>
<td align="left" valign="top">TensorFlow, H<sub>2</sub>O</td>
</tr>
</tbody>
</table>
</table-wrap></p>
</sec>
<sec id="sec22">
<label>4.2</label>
<title>Programming setup parameters</title>
<p>The performance of the proposed methodology is tested using the following training parameter settings.<table-wrap position="anchor" id="tab5">
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">#</th>
<th align="left" valign="top">Parameters</th>
<th align="left" valign="top">Specification/details</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1</td>
<td align="left" valign="top">Training Epochs per Run</td>
<td align="left" valign="top">100,000</td>
</tr>
<tr>
<td align="left" valign="top">2</td>
<td align="left" valign="top">Dataset batch size</td>
<td align="left" valign="top">300</td>
</tr>
<tr>
<td align="left" valign="top">3</td>
<td align="left" valign="top">Learning Rate</td>
<td align="left" valign="top">0.001</td>
</tr>
<tr>
<td align="left" valign="top">4</td>
<td align="left" valign="top">Activation Function</td>
<td align="left" valign="top">Sigmoid</td>
</tr>
<tr>
<td align="left" valign="top">5</td>
<td align="left" valign="top">No. of Hidden Units</td>
<td align="left" valign="top">50</td>
</tr>
<tr>
<td align="left" valign="top">6</td>
<td align="left" valign="top">No. of Neurons Per Layer</td>
<td align="left" valign="top">10</td>
</tr>
<tr>
<td align="left" valign="top">7</td>
<td align="left" valign="top">Drop Out Rate</td>
<td align="left" valign="top">0.1</td>
</tr>
<tr>
<td align="left" valign="top">8</td>
<td align="left" valign="top">Loss Function</td>
<td align="left" valign="top">MSE</td>
</tr>
</tbody>
</table>
</table-wrap></p>
<p>In this research, Mean Squared Error (MSE) serves as the loss function, while the ReLU activation function is employed.</p>
<p>The learning rate is set to 0.001, with a batch size of 300 and a dropout rate of 0.1. To enhance the performance of the BFL-CS method, a gradient-based optimizer is applied for hyperparameter optimization, as illustrated in Eqs. 12&#x2013;14 (<xref ref-type="bibr" rid="ref28">28</xref>). Another important aspect is that the data is purely textual in nature (<xref ref-type="bibr" rid="ref29">29</xref>).</p>
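<p>The training configuration above can be illustrated with a minimal sketch: a single sigmoid layer trained by gradient descent on the MSE loss, using the learning rate, batch size, dropout rate, and hidden-unit count from the table. This is a hypothetical toy model for illustration only, not the LSTM-DBN architecture actually used in the paper.</p>

```python
import numpy as np

# Hyperparameters taken from the programming setup table above.
LEARNING_RATE = 0.001
BATCH_SIZE = 300
DROPOUT_RATE = 0.1
HIDDEN_UNITS = 50

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(W, X, y, rng):
    """One gradient-descent step on the MSE loss, with inverted dropout on the input."""
    mask = rng.random(X.shape) >= DROPOUT_RATE
    Xd = X * mask / (1.0 - DROPOUT_RATE)
    y_hat = sigmoid(Xd @ W)                 # forward pass
    err = y_hat - y
    # dL/dW for L = mean((y_hat - y)^2), via the sigmoid derivative y_hat*(1-y_hat)
    grad = Xd.T @ (2.0 * err * y_hat * (1.0 - y_hat)) / len(y)
    return W - LEARNING_RATE * grad

rng = np.random.default_rng(0)
X = rng.random((BATCH_SIZE, HIDDEN_UNITS))          # one synthetic batch
y = rng.integers(0, 2, BATCH_SIZE).astype(float)    # binary bullying/non-bullying labels
W = np.zeros(HIDDEN_UNITS)
for _ in range(10):                                 # a few illustrative steps
    W = train_step(W, X, y, rng)
print(W.shape)
```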
</sec>
<sec id="sec23">
<label>4.3</label>
<title>Dataset description</title>
<p>In this paper, we utilize the cyberbullying dataset available on Kaggle from Sahane et al. (<xref ref-type="bibr" rid="ref30">30</xref>) and KLEJ (<italic>Kompleksowa Lista Ewaluacji J&#x0119;zykowych</italic>) (<xref ref-type="bibr" rid="ref31">31</xref>) to implement the BFL-CS method for the detection of cyberbullying.</p>
<p>In total, 48,000 data points were collected from both sources (<xref ref-type="bibr" rid="ref30">30</xref>, <xref ref-type="bibr" rid="ref31">31</xref>). The description is given below:<table-wrap position="anchor" id="tab6">
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Particular</th>
<th align="center" valign="top">Description</th>
<th align="left" valign="top">Detail</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Data Source</td>
<td align="center" valign="top">(<xref ref-type="bibr" rid="ref30">30</xref>, <xref ref-type="bibr" rid="ref31">31</xref>)</td>
<td align="left" valign="top">Available online</td>
</tr>
<tr>
<td align="left" valign="top">No. of Data Points</td>
<td align="center" valign="top">48,000</td>
<td align="left" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top">No. of Columns</td>
<td align="center" valign="top">4</td>
<td align="left" valign="top">Source of Data [Twitter, YouTube, Tumblr], Tweet [40 char], Date Time [DDMMYYYY HHMMSS], Location [Country]</td>
</tr>
</tbody>
</table>
</table-wrap></p>
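<p>The four-column schema above can be loaded and grouped by platform with standard tooling. The following sketch uses a small in-memory sample; the column names and example rows are assumptions for illustration and are not the dataset's actual headers or contents.</p>

```python
import csv
import io

# Hypothetical sample mirroring the four-column schema in the table above.
sample = """source,tweet,datetime,location
twitter,You are awesome,01012024 101500,US
youtube,Nobody likes you,02012024 113000,UK
tumblr,Great post!,03012024 090000,IN
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Group tweet texts by their source platform.
by_source = {}
for row in rows:
    by_source.setdefault(row["source"], []).append(row["tweet"])

print(sorted(by_source))  # → ['tumblr', 'twitter', 'youtube']
```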
</sec>
<sec id="sec24">
<label>4.4</label>
<title>Evaluation measures</title>
<p>The performance of the proposed method for cyberbullying detection is evaluated through statistics such as recall, accuracy, specificity, and F1-score (<xref ref-type="bibr" rid="ref32">32</xref>). The evaluation of these metrics is based on the mathematical expressions described below.</p>
<p><italic>Accuracy:</italic> The proportion of all data points that the model classifies correctly, measuring its overall efficacy on the cyberbullying detection task.</p>
<p><italic>Precision:</italic> The proportion of instances classified as positive that are actually positive, indicating how consistently the model's positive predictions are correct (<xref ref-type="bibr" rid="ref12">12</xref>).</p>
<p><italic>Recall:</italic> The proportion of actual positive instances that the model correctly identifies as positive (<xref ref-type="bibr" rid="ref13">13</xref>).</p>
<p><italic>F1-score:</italic> A derived value combining recall and precision; it is the harmonic mean of the two (<xref ref-type="bibr" rid="ref33">33</xref>).</p>
<p><italic>Specificity:</italic> The counterpart of recall for the negative class: the proportion of actual negative instances that the model correctly identifies as negative (<xref ref-type="bibr" rid="ref34">34</xref>).</p>
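<p>The five measures defined above follow directly from the confusion-matrix counts (true positives, false positives, true negatives, false negatives), as the following sketch shows; the counts used are illustrative, not values from our experiments.</p>

```python
def metrics(tp, fp, tn, fn):
    """Compute the evaluation measures defined above from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)             # recall on the negative class
    return accuracy, precision, recall, f1, specificity

# Illustrative counts (not from the paper's experiments).
acc, prec, rec, f1, spec = metrics(tp=90, fp=10, tn=85, fn=15)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3), round(spec, 3))
```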
</sec>
<sec id="sec25">
<label>4.5</label>
<title>Performance analysis</title>
<p>The statistical performance of the proposed model for detecting and preventing cyberbullying on social media is evaluated within a federated deep learning setting that employs various methods.</p>
<p>The BFL-CS method is evaluated on various measures against existing methods such as Vanilla RNN (v-RNN), Deep Reinforcement Learning (DRL), Residual Networks (ResNet), and Capsule Networks (CapNets) (<xref ref-type="bibr" rid="ref27">27</xref>, <xref ref-type="bibr" rid="ref35">35</xref>). <xref ref-type="fig" rid="fig2">Figures 2</xref>&#x2013;<xref ref-type="fig" rid="fig6">6</xref> compare the performance of these methods with that of BFL-CS. It is pertinent to note that the results are with respect to the overall accuracy of detection (<xref ref-type="bibr" rid="ref36">36</xref>).</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Comparison of accuracy.</p>
</caption>
<graphic xlink:href="fmed-11-1379211-g002.tif"/>
</fig>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Comparison of precision.</p>
</caption>
<graphic xlink:href="fmed-11-1379211-g003.tif"/>
</fig>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Comparison of recall measure.</p>
</caption>
<graphic xlink:href="fmed-11-1379211-g004.tif"/>
</fig>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Comparison of F1-score.</p>
</caption>
<graphic xlink:href="fmed-11-1379211-g005.tif"/>
</fig>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Graphical representation of specificity analysis.</p>
</caption>
<graphic xlink:href="fmed-11-1379211-g006.tif"/>
</fig>
<p>The plots shown in <xref ref-type="fig" rid="fig2">Figures 2</xref>, <xref ref-type="fig" rid="fig3">3</xref> illustrate the accuracy and precision of the BFL-CS method in comparison to the other known models. The BFL-CS method achieved a remarkable accuracy of 98.92%, whereas established methods such as v-RNN, DRL, ResNet, and CapNet recorded lower accuracies of 93.21, 96.43, 95.38, and 97.20%, respectively. In terms of precision, the BFL-CS method again excels with a notable score of 97.91%.</p>
<p><xref ref-type="fig" rid="fig4">Figure 4</xref> shows that the system achieves a high recall of 97.61%, while the existing methods report substantially lower recall values.</p>
<p><xref ref-type="fig" rid="fig5">Figure 5</xref> presents the graphical analysis of the F1-score for the BFL-CS method and the existing methods, again indicating the superiority of the proposed solution.</p>
<p>The proposed methodology achieved high specificity of 97.55% while the existing methods obtained low specificity of 96.37, 95.61, 94.53, and 94.16%, respectively (<xref ref-type="fig" rid="fig7">Figures 7</xref>, <xref ref-type="fig" rid="fig8">8</xref>).</p>
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption>
<p>AUC-ROC plots.</p>
</caption>
<graphic xlink:href="fmed-11-1379211-g007.tif"/>
</fig>
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption>
<p>Comparison of run time.</p>
</caption>
<graphic xlink:href="fmed-11-1379211-g008.tif"/>
</fig>
<p>The plot above shows the area under the ROC curve for the proposed methodology. The proposed method achieved a higher AUC-ROC of 0.9812, while the existing models vRNN, DRL, ResNet, and CapNet obtained lower AUC-ROC values of 0.9691, 0.9592, 0.9494, and 0.9576, respectively.</p>
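<p>The AUC-ROC reported above can be understood through its rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen positive instance is scored higher than a randomly chosen negative one. The sketch below computes it that way; the scores and labels are illustrative, not outputs of our model.</p>

```python
def auc_roc(scores, labels):
    """AUC via the rank-sum formulation: the fraction of positive/negative
    pairs in which the positive instance receives the higher score
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative scores and labels (not the paper's model outputs).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(auc_roc(scores, labels))  # perfect separation would give 1.0
```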
<p>We have tabulated the statistical parameters of the proposed solution against the existing models vRNN, DRL, ResNet, and CapNet; the details of our analysis are given in <xref ref-type="table" rid="tab7">Table 1</xref>.</p>
<table-wrap position="float" id="tab7">
<label>Table 1</label>
<caption>
<p>Tabulation of statistical performance measure of various laid down processes against the proposed methodology.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Methods</th>
<th align="center" valign="top">Accuracy (%)</th>
<th align="center" valign="top">Precision (%)</th>
<th align="center" valign="top">Recall (%)</th>
<th align="center" valign="top">F1-score (%)</th>
<th align="center" valign="top">Specificity (%)</th>
<th align="center" valign="top">AUC-ROC</th>
<th align="center" valign="top">Computational Time (seconds)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">BFL-CS</td>
<td align="center" valign="middle">98.92</td>
<td align="center" valign="middle">97.91</td>
<td align="center" valign="middle">97.82</td>
<td align="center" valign="middle">97.86</td>
<td align="center" valign="middle">97.85</td>
<td align="center" valign="middle">0.9812</td>
<td align="center" valign="middle">16</td>
</tr>
<tr>
<td align="left" valign="middle">v-RNN</td>
<td align="center" valign="middle">93.21</td>
<td align="center" valign="middle">95.12</td>
<td align="center" valign="middle">96.58</td>
<td align="center" valign="middle">96.42</td>
<td align="center" valign="middle">96.37</td>
<td align="center" valign="middle">0.9721</td>
<td align="center" valign="middle">19</td>
</tr>
<tr>
<td align="left" valign="middle">DRL</td>
<td align="center" valign="middle">96.43</td>
<td align="center" valign="middle">96.17</td>
<td align="center" valign="middle">96.13</td>
<td align="center" valign="middle">96.07</td>
<td align="center" valign="middle">95.61</td>
<td align="center" valign="middle">0.9632</td>
<td align="center" valign="middle">22</td>
</tr>
<tr>
<td align="left" valign="middle">ResNet</td>
<td align="center" valign="middle">95.38</td>
<td align="center" valign="middle">95.34</td>
<td align="center" valign="middle">95.41</td>
<td align="center" valign="middle">95.34</td>
<td align="center" valign="middle">94.53</td>
<td align="center" valign="middle">0.9574</td>
<td align="center" valign="middle">27</td>
</tr>
<tr>
<td align="left" valign="middle">CapNet</td>
<td align="center" valign="middle">97.2</td>
<td align="center" valign="middle">95.12</td>
<td align="center" valign="middle">94.79</td>
<td align="center" valign="middle">94.73</td>
<td align="center" valign="middle">94.16</td>
<td align="center" valign="middle">0.9526</td>
<td align="center" valign="middle">33</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>BFL-CS achieved the highest accuracy (98.92%) and AUC-ROC (0.9812), indicating that it correctly classified the most data points and has the best ability to distinguish between positive and negative classes. Moreover, it also has the lowest computational time (16&#x2009;s).</p>
<p>v-RNN, DRL, and ResNet all show similar accuracy (around 95&#x2013;96%) and computational time (around 20&#x2009;s), with good precision, recall, and F1-scores, meaning they identify both positive and negative cases well. CapNet achieves a higher accuracy (97.2%) than the other baselines but the lowest AUC-ROC (0.9526) and the highest computational time (33&#x2009;s), which suggests that CapNet is less efficient than the other methods despite its good overall performance. In addition to comparing BFL-CS with these deep learning models, we also compared the accuracy of other implemented solutions (<xref ref-type="table" rid="tab8">Table 2</xref>).</p>
<table-wrap position="float" id="tab8">
<label>Table 2</label>
<caption>
<p>Comparison of technique, dataset &#x0026; accuracy of previous work done on the subject.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">#</th>
<th align="left" valign="top">Paper title &#x0026; Ref No.</th>
<th align="left" valign="top">Techniques used</th>
<th align="left" valign="top">Dataset</th>
<th align="center" valign="top">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1</td>
<td align="left" valign="top">Shetty et al. (<xref ref-type="bibr" rid="ref3">3</xref>)</td>
<td align="left" valign="top">SBERT, Universal Sentence Encoders &#x2013;DAN, Universal Sentence Encoders &#x2013; Transformers</td>
<td align="left" valign="top">Data from Kaggle, Youtube, Twitter</td>
<td align="center" valign="top">97.12%</td>
</tr>
<tr>
<td align="left" valign="top">2</td>
<td align="left" valign="top">Fati et al. (<xref ref-type="bibr" rid="ref8">8</xref>)</td>
<td align="left" valign="top">Continuous Bag of Words based Conv1DLSTM</td>
<td align="left" valign="top">Data from Kaggle</td>
<td align="center" valign="top">97.34%</td>
</tr>
<tr>
<td align="left" valign="top">3</td>
<td align="left" valign="top">Bruwaene et al. (<xref ref-type="bibr" rid="ref9">9</xref>)</td>
<td align="left" valign="top">Multi-technique annotation and an ensemble of SVM, CNN &#x0026; XGBoost</td>
<td align="left" valign="top">VISR Dataset</td>
<td align="center" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top">4</td>
<td align="left" valign="top">Bozyigit et al. (<xref ref-type="bibr" rid="ref10">10</xref>)</td>
<td align="left" valign="top">Artificial Neural Networks</td>
<td align="left" valign="top">Twitter &#x2013; Hindi/Marathi</td>
<td align="center" valign="top">91%</td>
</tr>
<tr>
<td align="left" valign="top">5</td>
<td align="left" valign="top">Samee et al. (<xref ref-type="bibr" rid="ref11">11</xref>)</td>
<td align="left" valign="top">FedBERT</td>
<td align="left" valign="top">Twitter</td>
<td align="center" valign="top">92.15%</td>
</tr>
<tr>
<td align="left" valign="top">6</td>
<td align="left" valign="top">Proposed model</td>
<td align="left" valign="top">BFL-CS [Blockchain, Federated Learning, Deep Learning (LSTM &#x0026; DBN in-tandem)]</td>
<td align="left" valign="top">Cyberbullying dataset</td>
<td align="center" valign="top">98.2%</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The table suggests that models using sentence encoders (SBERT, DAN) perform well on publicly available data (Kaggle, YouTube, Twitter) and achieve high accuracy (above 97%). The model using a multi-technique approach (SVM, CNN, XGBoost) shows competitive performance on a specific dataset (VISR) (<xref ref-type="bibr" rid="ref37">37</xref>, <xref ref-type="bibr" rid="ref38">38</xref>). The BFL-CS model, which combines blockchain, federated learning, and deep learning (LSTM &#x0026; DBN), achieves the highest accuracy on the cyberbullying dataset described above.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="sec26">
<label>5</label>
<title>Conclusion</title>
<p>This study presents a novel approach, the Blockchain &#x0026; Federated Learning based Cybersecurity Solution (BFL-CS) algorithm, for the detection and prevention of cyberbullying in social media. The approach utilizes an LSTM-DBN pair in tandem along with blockchain-based federated learning. A major roadblock of the proposed methodology is its reliance on multiple technologies, which makes implementation complex, particularly for the federated learning layer: two complex deep learning methods are already running while FL is carried out across the blocks of a ledger updated in real time. This level of interconnection between cutting-edge technologies requires significant computational resources and strong network data-transfer capabilities. We mitigate this cost by performing only one blockchain update per training epoch; if the frequency of block updates were increased, the approach could become very computationally expensive, as each update requires a hashing step and consensus building. Future work should therefore explore making the blockchain and vanilla federated learning processes more efficient. At present, the deep learning engine achieves high efficacy, but it contributes only a fraction of the overall approach, and handling the federated learning layer becomes crucial as the data size grows. Past attempts to make this process more efficient have compromised its security, so the future scope of work will play out in this direction. We also plan to develop an in-line module within a social network for real-time reporting and correction of cyberbullying online.</p>
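<p>The single post-training ledger update discussed above can be sketched as a hash-linked chain of blocks, one appended per training round. This is a hedged illustration only: the field names and digests are hypothetical, and consensus building, which the paper identifies as part of the per-update cost, is omitted.</p>

```python
import hashlib
import json
import time

def make_block(prev_hash, model_update):
    """Append-only ledger entry linking to its predecessor by SHA-256 hash."""
    block = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,
        "model_update": model_update,  # e.g. a digest of aggregated FL weights
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# One block per training epoch, as in the single-update scheme described above.
chain = [make_block("0" * 64, {"round": 0, "weights_digest": "abc123"})]
chain.append(make_block(chain[-1]["hash"], {"round": 1, "weights_digest": "def456"}))

# Each extra update repeats this hashing (plus consensus, omitted here),
# which is why frequent block updates become computationally expensive.
assert chain[1]["prev_hash"] == chain[0]["hash"]
print(len(chain))
```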
</sec>
<sec sec-type="data-availability" id="sec27">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.</p>
</sec>
<sec sec-type="author-contributions" id="sec28">
<title>Author contributions</title>
<p>AA: Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration, Writing &#x2013; original draft. AM: Conceptualization, Formal analysis, Data curation, Validation, Visualization, Writing &#x2013; review &#x0026; editing.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="sec29">
<title>Funding</title>
<p>The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 55-865-1442). Therefore, authors gratefully acknowledge technical and financial support from the Ministry of Education and King Abdulaziz university, DSR, Jeddah, Saudi Arabia.</p>
</sec>
<sec sec-type="COI-statement" id="sec30">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1">
<label>1.</label>
<citation citation-type="other"><person-group person-group-type="author">
<name><surname>Djuraskovic</surname> <given-names>O.</given-names></name>
</person-group>. <article-title>Cyberbullying statistics, facts, and trends (2023) with charts. FirstSiteGuide</article-title>. (<year>2023</year>). <comment>Available at:</comment> <ext-link xlink:href="https://firstsiteguide.com/cyberbullying-stats/" ext-link-type="uri">https://firstsiteguide.com/cyberbullying-stats/</ext-link></citation>
</ref>
<ref id="ref2">
<label>2.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gohal</surname> <given-names>G</given-names></name> <name><surname>Alqassim</surname> <given-names>A</given-names></name> <name><surname>Eltyeb</surname> <given-names>E</given-names></name> <name><surname>Rayyani</surname> <given-names>A</given-names></name> <name><surname>Hakami</surname> <given-names>B</given-names></name> <name><surname>Al Faqih</surname> <given-names>A</given-names></name> <etal/></person-group>. <article-title>Prevalence and related risks of cyberbullying and its effects on adolescent</article-title>. <source>BMC Psychiatry</source>. (<year>2023</year>) <volume>23</volume>:<fpage>39</fpage>. doi: <pub-id pub-id-type="doi">10.1186/s12888-023-04542-0</pub-id>, PMID: <pub-id pub-id-type="pmid">36641459</pub-id></citation>
</ref>
<ref id="ref3">
<label>3.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shetty</surname> <given-names>NP</given-names></name> <name><surname>Muniyal</surname> <given-names>B</given-names></name> <name><surname>Priyanshu</surname> <given-names>A</given-names></name> <name><surname>Das</surname> <given-names>VR</given-names></name></person-group>. <article-title>FedBully: a cross-device federated approach for privacy enabled cyber bullying detection using sentence encoders</article-title>. <source>J Cyber Sec Mobil</source>. (<year>2023</year>) <volume>12</volume>:<fpage>465</fpage>&#x2013;<lpage>96</lpage>. doi: <pub-id pub-id-type="doi">10.13052/jcsm2245-1439.1242</pub-id></citation>
</ref>
<ref id="ref4">
<label>4.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chakraborty</surname> <given-names>K</given-names></name> <name><surname>Bhatia</surname> <given-names>S</given-names></name> <name><surname>Bhattacharyya</surname> <given-names>S</given-names></name> <name><surname>Platos</surname> <given-names>J</given-names></name> <name><surname>Bag</surname> <given-names>R</given-names></name> <name><surname>Hassanien</surname> <given-names>AE</given-names></name></person-group>. <article-title>Sentiment analysis of COVID-19 tweets by deep learning classifiers&#x2014;a study to show how popularity is affecting accuracy in social media</article-title>. <source>Appl Soft Comput</source>. (<year>2020</year>) <volume>97</volume>:<fpage>106754</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.asoc.2020.106754</pub-id>, PMID: <pub-id pub-id-type="pmid">33013254</pub-id></citation>
</ref>
<ref id="ref5">
<label>5.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yosep</surname> <given-names>I</given-names></name> <name><surname>Hikmat</surname> <given-names>R</given-names></name> <name><surname>Mardhiyah</surname> <given-names>A</given-names></name></person-group>. <article-title>Preventing cyberbullying and reducing its negative impact on students using E-parenting: a scoping review</article-title>. <source>Sustain For</source>. (<year>2023</year>) <volume>15</volume>:<fpage>1752</fpage>. doi: <pub-id pub-id-type="doi">10.3390/su15031752</pub-id></citation>
</ref>
<ref id="ref6">
<label>6.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iwendi</surname> <given-names>C</given-names></name> <name><surname>Srivastava</surname> <given-names>G</given-names></name> <name><surname>Khan</surname> <given-names>S</given-names></name> <name><surname>Maddikunta</surname> <given-names>PKR</given-names></name></person-group>. <article-title>Cyberbullying detection solutions based on deep learning architectures</article-title>. <source>Multimedia Systems</source>. (<year>2023</year>) <volume>29</volume>:<fpage>1839</fpage>&#x2013;<lpage>52</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00530-020-00701-5</pub-id></citation>
</ref>
<ref id="ref7">
<label>7.</label>
<citation citation-type="journal"><person-group person-group-type="author">
<name><surname>Sebastiani</surname> <given-names>F</given-names></name>
</person-group>. <article-title>Machine learning in automated text categorization</article-title>. <source>ACM Comput Surv</source>. (<year>2002</year>) <volume>34</volume>:<fpage>1</fpage>&#x2013;<lpage>47</lpage>. doi: <pub-id pub-id-type="doi">10.1145/505282.505283</pub-id></citation>
</ref>
<ref id="ref8">
<label>8.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fati</surname> <given-names>SM</given-names></name> <name><surname>Muneer</surname> <given-names>A</given-names></name> <name><surname>Alwadain</surname> <given-names>A</given-names></name> <name><surname>Balogun</surname> <given-names>AO</given-names></name></person-group>. <article-title>Cyberbullying detection on twitter using deep learning-based attention mechanisms and continuous Bag of words feature extraction</article-title>. <source>Mathematics</source>. (<year>2023</year>) <volume>11</volume>:<fpage>3567</fpage>. doi: <pub-id pub-id-type="doi">10.3390/math11163567</pub-id></citation>
</ref>
<ref id="ref9">
<label>9.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bruwaene</surname> <given-names>DV</given-names></name> <name><surname>Huang</surname> <given-names>Q</given-names></name> <name><surname>Inkpen</surname> <given-names>D</given-names></name></person-group>. <article-title>A multi-platform dataset for detecting cyberbullying in social media</article-title>. <source>Lang Resour Eval</source>. (<year>2020</year>) <volume>54</volume>:<fpage>1</fpage>&#x2013;<lpage>24</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10579-020-09488-3</pub-id></citation>
</ref>
<ref id="ref10">
<label>10.</label>
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Bozyigit</surname> <given-names>A.</given-names></name> <name><surname>Utku</surname> <given-names>S.</given-names></name> <name><surname>Nasibo&#x011F;lu</surname> <given-names>E.</given-names></name></person-group>. <article-title>Cyberbullying detection by using artificial neural network models</article-title>. <conf-name>2019 4th International Conference on Computer Science and Engineering (UBMK)</conf-name>, <conf-loc>Samsun, Turkey</conf-loc>. (<year>2019</year>).</citation>
</ref>
<ref id="ref11">
<label>11.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Samee</surname> <given-names>NA</given-names></name> <name><surname>Khan</surname> <given-names>U</given-names></name> <name><surname>Khan</surname> <given-names>S</given-names></name> <name><surname>Jamjoom</surname> <given-names>MM</given-names></name> <name><surname>Sharif</surname> <given-names>M</given-names></name> <name><surname>Kim</surname> <given-names>DH</given-names></name></person-group>. <article-title>Safeguarding online spaces: a powerful fusion of federated learning, word embeddings, and emotional features for cyberbullying detection</article-title>. <source>IEEE Access</source>. (<year>2023</year>) <volume>11</volume>:<fpage>124524</fpage>&#x2013;<lpage>41</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2023.3329347</pub-id></citation>
</ref>
<ref id="ref12">
<label>12.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>W</given-names></name> <name><surname>Lu</surname> <given-names>S</given-names></name> <name><surname>Cai</surname> <given-names>Z</given-names></name> <name><surname>Wang</surname> <given-names>R</given-names></name> <name><surname>Wang</surname> <given-names>L</given-names></name> <name><surname>Yin</surname> <given-names>L</given-names></name></person-group>. <article-title>PAL-BERT: an improved question answering model</article-title>. <source>Comput Model Eng Sci</source>. (<year>2024</year>) <volume>139</volume>:<fpage>2729</fpage>&#x2013;<lpage>45</lpage>. doi: <pub-id pub-id-type="doi">10.32604/cmes.2023.046692</pub-id></citation>
</ref>
<ref id="ref13">
<label>13.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>X</given-names></name> <name><surname>Zhou</surname> <given-names>G</given-names></name> <name><surname>Kong</surname> <given-names>M</given-names></name> <name><surname>Yin</surname> <given-names>Z</given-names></name> <name><surname>Li</surname> <given-names>X</given-names></name> <name><surname>Yin</surname> <given-names>L</given-names></name> <etal/></person-group>. <article-title>Developing multi-labelled corpus of twitter short texts: a semi-automatic method</article-title>. <source>Systems</source>. (<year>2023</year>) <volume>11</volume>:<fpage>390</fpage>. doi: <pub-id pub-id-type="doi">10.3390/systems11080390</pub-id></citation>
</ref>
<ref id="ref14">
<label>14.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Z</given-names></name> <name><surname>Kong</surname> <given-names>X</given-names></name> <name><surname>Liu</surname> <given-names>S</given-names></name> <name><surname>Yang</surname> <given-names>Z</given-names></name></person-group>. <article-title>Effects of computer-based mind mapping on students' reflection, cognitive presence, and learning outcomes in an online course</article-title>. <source>Distance Educ</source>. (<year>2023</year>) <volume>44</volume>:<fpage>544</fpage>&#x2013;<lpage>62</lpage>. doi: <pub-id pub-id-type="doi">10.1080/01587919.2023.2226615</pub-id></citation>
</ref>
<ref id="ref15">
<label>15.</label>
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>JM</given-names></name> <name><surname>Burchfiel</surname> <given-names>B</given-names></name> <name><surname>Zhu</surname> <given-names>X</given-names></name> <name><surname>Bellmore</surname> <given-names>A</given-names></name></person-group>. <article-title>An examination of regret in bullying tweets</article-title>. In <source>Proceedings of the 2013 conference of the North American chapter of the association for computational linguistics: human language technologies</source> (<year>2013</year>) pp. <fpage>697</fpage>&#x2013;<lpage>702</lpage>.</citation>
</ref>
<ref id="ref16">
<label>16.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dadvar</surname> <given-names>M</given-names></name> <name><surname>Trieschnigg</surname> <given-names>D</given-names></name> <name><surname>Ordelman</surname> <given-names>R</given-names></name> <name><surname>de Jong</surname> <given-names>F</given-names></name></person-group>. <article-title>Improving cyberbullying detection with user context</article-title> In: <person-group person-group-type="editor"><name><surname>Serdyukov</surname> <given-names>P</given-names></name> <name><surname>Braslavski</surname> <given-names>P</given-names></name> <name><surname>Kuznetsov</surname> <given-names>SO</given-names></name> <name><surname>Kamps</surname> <given-names>J</given-names></name> <name><surname>R&#x00FC;ger</surname> <given-names>S</given-names></name> <name><surname>Agichtein</surname> <given-names>E</given-names></name> <etal/></person-group>, editors. <source>ECIR 2013. LNCS</source>, vol. <volume>7814</volume>. <publisher-loc>Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2013</year>). <fpage>693</fpage>&#x2013;<lpage>6</lpage>.</citation>
</ref>
<ref id="ref17">
<label>17.</label>
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Foong</surname> <given-names>YJ</given-names></name> <name><surname>Oussalah</surname> <given-names>M</given-names></name></person-group>. <article-title>Cyberbullying system detection and analysis</article-title>. <conf-name>2017 European intelligence and security informatics conference (EISIC)</conf-name>, <conf-loc>Athens, Greece</conf-loc>. (<year>2017</year>), pp. <fpage>40</fpage>&#x2013;<lpage>46</lpage>.</citation>
</ref>
<ref id="ref18">
<label>18.</label>
<citation citation-type="other"><person-group person-group-type="author">
<name><surname>Poeter</surname> <given-names>D</given-names></name>
</person-group>. <article-title>Study: a quarter of parents say their child involved in cyberbullying</article-title>. (<year>2011</year>). <comment>Available at:</comment> <ext-link xlink:href="https://www.pcmag.com/article2/0,2817,2388540,00.asp" ext-link-type="uri">https://www.pcmag.com/article2/0,2817,2388540,00.asp</ext-link></citation>
</ref>
<ref id="ref19">
<label>19.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salawu</surname> <given-names>S</given-names></name> <name><surname>He</surname> <given-names>Y</given-names></name> <name><surname>Lumsden</surname> <given-names>J</given-names></name></person-group>. <article-title>Approaches to automated detection of cyberbullying: a survey</article-title>. <source>IEEE Trans Affect Comput</source>. (<year>2020</year>) <volume>11</volume>:<fpage>3</fpage>&#x2013;<lpage>24</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TAFFC.2017.2761757</pub-id></citation>
</ref>
<ref id="ref20">
<label>20.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rosa</surname> <given-names>H</given-names></name> <name><surname>Pereira</surname> <given-names>N</given-names></name> <name><surname>Ribeiro</surname> <given-names>R</given-names></name> <name><surname>Ferreira</surname> <given-names>PC</given-names></name> <name><surname>Carvalho</surname> <given-names>JP</given-names></name> <name><surname>Oliveira</surname> <given-names>S</given-names></name> <etal/></person-group>. <article-title>Automatic cyberbullying detection: a systematic review</article-title>. <source>Comput Hum Behav</source>. (<year>2019</year>) <volume>93</volume>:<fpage>333</fpage>&#x2013;<lpage>45</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.chb.2018.12.021</pub-id></citation>
</ref>
<ref id="ref21">
<label>21.</label>
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Nadali</surname> <given-names>S</given-names></name> <name><surname>Murad</surname> <given-names>MAA</given-names></name> <name><surname>Sharef</surname> <given-names>NM</given-names></name> <name><surname>Mustapha</surname> <given-names>A</given-names></name> <name><surname>Shojaee</surname> <given-names>S</given-names></name></person-group>. <article-title>A review of cyberbullying detection: an overview</article-title>. <conf-name>Proceedings of the 2013 13th international conference on intelligent systems design and applications</conf-name>. <conf-loc>Selangor, Malaysia</conf-loc>. (<year>2013</year>), pp. <fpage>325</fpage>&#x2013;<lpage>330</lpage>.</citation>
</ref>
<ref id="ref22">
<label>22.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>S</given-names></name> <name><surname>Razi</surname> <given-names>A</given-names></name> <name><surname>Stringhini</surname> <given-names>G</given-names></name> <name><surname>Wisniewski</surname> <given-names>PJ</given-names></name> <name><surname>De Choudhury</surname> <given-names>M</given-names></name></person-group>. <article-title>A human-centered systematic literature review of cyberbullying detection algorithms</article-title>. <source>Proc ACM Hum Comput Interact</source>. (<year>2021</year>) <volume>5</volume>:<fpage>325</fpage>. doi: <pub-id pub-id-type="doi">10.1145/3476066</pub-id></citation>
</ref>
<ref id="ref23">
<label>23.</label>
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Potha</surname> <given-names>N</given-names></name> <name><surname>Maragoudakis</surname> <given-names>M</given-names></name></person-group>. <article-title>Cyberbullying detection using time series modeling</article-title>. <conf-name>Proceedings of the 2014 IEEE international conference on data mining workshop</conf-name>, <conf-loc>Shenzhen, China</conf-loc>. (<year>2014</year>), pp. <fpage>373</fpage>&#x2013;<lpage>382</lpage>.</citation>
</ref>
<ref id="ref24">
<label>24.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perera</surname> <given-names>A</given-names></name> <name><surname>Fernando</surname> <given-names>P</given-names></name></person-group>. <article-title>Accurate cyberbullying detection and prevention on social media</article-title>. <source>Proc Comput Sci</source>. (<year>2021</year>) <volume>181</volume>:<fpage>605</fpage>&#x2013;<lpage>11</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.procs.2021.01.207</pub-id></citation>
</ref>
<ref id="ref25">
<label>25.</label>
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Pawar</surname> <given-names>R</given-names></name> <name><surname>Raje</surname> <given-names>RR</given-names></name></person-group>. <article-title>Multilingual cyberbullying detection system</article-title>. <conf-name>Proceedings of the 2019 IEEE international conference on electro information technology (EIT)</conf-name>, <conf-loc>Brookings, SD, USA</conf-loc>. (<year>2019</year>), pp. <fpage>40</fpage>&#x2013;<lpage>44</lpage>.</citation>
</ref>
<ref id="ref26">
<label>26.</label>
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Haidar</surname> <given-names>B</given-names></name> <name><surname>Chamoun</surname> <given-names>M</given-names></name> <name><surname>Serhrouchni</surname> <given-names>A</given-names></name></person-group>. <article-title>Multilingual cyberbullying detection system: detecting cyberbullying in Arabic content</article-title>. <conf-name>Proceedings of the 2017 1st cyber security in networking conference (CSNet)</conf-name>, <conf-loc>Rio de Janeiro, Brazil</conf-loc>. (<year>2017</year>), pp. <fpage>1</fpage>&#x2013;<lpage>8</lpage>.</citation>
</ref>
<ref id="ref27">
<label>27.</label>
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Kargutkar</surname> <given-names>SM</given-names></name> <name><surname>Chitre</surname> <given-names>V</given-names></name></person-group>. <article-title>A study of cyberbullying detection using machine learning techniques</article-title>. <conf-name>Proceedings of the 2020 fourth international conference on computing methodologies and communication (ICCMC)</conf-name>, <conf-loc>Erode, India</conf-loc>. (<year>2020</year>), pp. <fpage>734</fpage>&#x2013;<lpage>739</lpage>.</citation>
</ref>
<ref id="ref28">
<label>28.</label>
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Dinakar</surname> <given-names>K</given-names></name> <name><surname>Reichart</surname> <given-names>R</given-names></name> <name><surname>Lieberman</surname> <given-names>H</given-names></name></person-group>. <article-title>Modeling the detection of textual cyberbullying</article-title> In: <source>Proceedings of the International AAAI Conference on Web and Social Media</source> (<year>2011</year>). Vol. <volume>5</volume>, pp. <fpage>11</fpage>&#x2013;<lpage>17</lpage>.</citation>
</ref>
<ref id="ref29">
<label>29.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bhatia</surname> <given-names>S</given-names></name> <name><surname>Sharma</surname> <given-names>M</given-names></name> <name><surname>Bhatia</surname> <given-names>KK</given-names></name> <name><surname>Das</surname> <given-names>P</given-names></name></person-group>. <article-title>Opinion target extraction with sentiment analysis</article-title>. <source>Int J Comput</source>. (<year>2018</year>) <volume>17</volume>:<fpage>136</fpage>&#x2013;<lpage>42</lpage>. doi: <pub-id pub-id-type="doi">10.47839/ijc.17.3.1033</pub-id></citation>
</ref>
<ref id="ref30">
<label>30.</label>
<citation citation-type="other"><person-group person-group-type="author">
<collab id="coll1">Cyberbullying Dataset</collab>
</person-group>. (<year>2020</year>). <comment>Available at:</comment> <ext-link xlink:href="https://www.kaggle.com/datasets/saurabhshahane/cyberbullying-dataset" ext-link-type="uri">https://www.kaggle.com/datasets/saurabhshahane/cyberbullying-dataset</ext-link></citation>
</ref>
<ref id="ref31">
<label>31.</label>
<citation citation-type="other"><person-group person-group-type="author">
<collab id="coll2">KLEJ</collab>
</person-group>. <source>The KLEJ benchmark (Kompleksowa Lista Ewaluacji J&#x0119;zykowych) is a set of nine evaluation tasks for Polish language understanding</source>. (<year>2020</year>)</citation>
</ref>
<ref id="ref32">
<label>32.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Basheer</surname> <given-names>S</given-names></name> <name><surname>Bhatia</surname> <given-names>S</given-names></name> <name><surname>Sakri</surname> <given-names>SB</given-names></name></person-group>. <article-title>Computational modeling of dementia prediction using deep neural network: analysis on OASIS dataset</article-title>. <source>IEEE Access</source>. (<year>2021</year>) <volume>9</volume>:<fpage>42449</fpage>&#x2013;<lpage>62</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2021.3066213</pub-id></citation>
</ref>
<ref id="ref33">
<label>33.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nahar</surname> <given-names>V</given-names></name> <name><surname>Al-Maskari</surname> <given-names>S</given-names></name> <name><surname>Li</surname> <given-names>X</given-names></name> <name><surname>Pang</surname> <given-names>C</given-names></name></person-group>. <article-title>Semi-supervised learning for cyberbullying detection in social networks</article-title> In: <person-group person-group-type="editor"><name><surname>Wang</surname> <given-names>H</given-names></name> <name><surname>Sharaf</surname> <given-names>MA</given-names></name></person-group>, editors. <source>Databases theory and applications. ADC 2014. Lecture notes in computer science</source>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2014</year>).</citation>
</ref>
<ref id="ref34">
<label>34.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>X</given-names></name> <name><surname>Wang</surname> <given-names>S</given-names></name> <name><surname>Lu</surname> <given-names>S</given-names></name> <name><surname>Yin</surname> <given-names>Z</given-names></name> <name><surname>Li</surname> <given-names>X</given-names></name> <name><surname>Yin</surname> <given-names>L</given-names></name> <etal/></person-group>. <article-title>Adapting feature selection algorithms for the classification of Chinese texts</article-title>. <source>Systems</source>. (<year>2023</year>) <volume>11</volume>:<fpage>483</fpage>. doi: <pub-id pub-id-type="doi">10.3390/systems11090483</pub-id></citation>
</ref>
<ref id="ref35">
<label>35.</label>
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Yin</surname> <given-names>D</given-names></name> <name><surname>Xue</surname> <given-names>Z</given-names></name> <name><surname>Hong</surname> <given-names>L</given-names></name> <name><surname>Davison</surname> <given-names>BD</given-names></name> <name><surname>Kontostathis</surname> <given-names>A</given-names></name> <name><surname>Edwards</surname> <given-names>L</given-names></name></person-group>. <article-title>Detection of harassment on web 2.0</article-title>. In <source>Proceedings of the Content Analysis in the Web 2.0 (CAW 2.0) Workshop</source> (<year>2009</year>) pp. <fpage>1</fpage>&#x2013;<lpage>7</lpage>.</citation>
</ref>
<ref id="ref36">
<label>36.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J</given-names></name> <name><surname>Yang</surname> <given-names>K</given-names></name> <name><surname>Xiao</surname> <given-names>Z</given-names></name> <name><surname>Jiang</surname> <given-names>H</given-names></name> <name><surname>Xu</surname> <given-names>S</given-names></name> <name><surname>Dustdar</surname> <given-names>S</given-names></name></person-group>. <article-title>Improving commute experience for private car users via blockchain-enabled multitask learning</article-title>. <source>IEEE Internet Things J</source>. (<year>2023</year>) <volume>10</volume>:<fpage>21656</fpage>&#x2013;<lpage>69</lpage>. doi: <pub-id pub-id-type="doi">10.1109/JIOT.2023.3317639</pub-id></citation>
</ref>
<ref id="ref37">
<label>37.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname> <given-names>J</given-names></name> <name><surname>Sheng</surname> <given-names>H</given-names></name> <name><surname>Wang</surname> <given-names>S</given-names></name> <name><surname>Cong</surname> <given-names>R</given-names></name> <name><surname>Yang</surname> <given-names>D</given-names></name> <name><surname>Zhang</surname> <given-names>Y</given-names></name></person-group>. <article-title>Blockchain-based distributed multiagent reinforcement learning for collaborative multiobject tracking framework</article-title>. <source>IEEE Trans Comput</source>. (<year>2024</year>) <volume>73</volume>:<fpage>778</fpage>&#x2013;<lpage>88</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TC.2023.3343102</pub-id></citation>
</ref>
<ref id="ref38">
<label>38.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rahmani</surname> <given-names>MKI</given-names></name> <name><surname>Shuaib</surname> <given-names>M</given-names></name> <name><surname>Alam</surname> <given-names>S</given-names></name> <name><surname>Siddiqui</surname> <given-names>ST</given-names></name> <name><surname>Ahmad</surname> <given-names>S</given-names></name> <name><surname>Bhatia</surname> <given-names>S</given-names></name> <etal/></person-group>. <article-title>Blockchain-based trust management framework for cloud computing-based internet of medical things (IoMT): a systematic review</article-title>. <source>Comput Intell Neurosci</source>. (<year>2022</year>) <volume>2022</volume>:<fpage>9766844</fpage>. doi: <pub-id pub-id-type="doi">10.1155/2022/9766844</pub-id></citation>
</ref>
</ref-list>
</back>
</article>