<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<?covid-19-tdm?>
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Public Health</journal-id>
<journal-title>Frontiers in Public Health</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Public Health</abbrev-journal-title>
<issn pub-type="epub">2296-2565</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpubh.2021.781827</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Public Health</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Neural Network Based Mental Depression Identification and Sentiments Classification Technique From Speech Signals: A COVID-19 Focused Pandemic Study</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Ahmed</surname> <given-names>Syed Thouheed</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1490835/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Singh</surname> <given-names>Dollar Konjengbam</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Basha</surname> <given-names>Syed Muzamil</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1453621/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Abouel Nasr</surname> <given-names>Emad</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Kamrani</surname> <given-names>Ali K.</given-names></name>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Aboudaif</surname> <given-names>Mohamed K.</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="corresp" rid="c002"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1490507/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Computing and Information Technology, REVA University</institution>, <addr-line>Bengaluru</addr-line>, <country>India</country></aff>
<aff id="aff2"><sup>2</sup><institution>ICT for Internet and Multimedia, University of Padua</institution>, <addr-line>Padua</addr-line>, <country>Italy</country></aff>
<aff id="aff3"><sup>3</sup><institution>School of Computer Science and Engineering, REVA University</institution>, <addr-line>Bengaluru</addr-line>, <country>India</country></aff>
<aff id="aff4"><sup>4</sup><institution>Industrial Engineering Department, College of Engineering, King Saud University</institution>, <addr-line>Riyadh</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff5"><sup>5</sup><institution>Industrial Engineering Department, College of Engineering, University of Houston</institution>, <addr-line>Houston, TX</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Celestine Iwendi, School of Creative Technologies University of Bolton, United Kingdom</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Parvaiz Ahmad Naik, Xi&#x00027;an Jiaotong University, China; Ebuka Ibeke, Robert Gordon University, United Kingdom</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Emad Abouel Nasr <email>eabdelghany&#x00040;ksu.edu.sa</email></corresp>
<corresp id="c002">Mohamed K. Aboudaif <email>maboudaif&#x00040;ksu.edu.sa</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Digital Public Health, a section of the journal Frontiers in Public Health</p></fn></author-notes>
<pub-date pub-type="epub">
<day>06</day>
<month>12</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>9</volume>
<elocation-id>781827</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>09</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>08</day>
<month>11</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Ahmed, Singh, Basha, Abouel Nasr, Kamrani and Aboudaif.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Ahmed, Singh, Basha, Abouel Nasr, Kamrani and Aboudaif</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>COVID-19 (caused by SARS-CoV-2) was declared a global pandemic by the World Health Organization (WHO) in March 2020. This led to previously unforeseen measures aimed at curbing its spread, such as the lockdown of cities and districts and restrictions on international travel. Various researchers and institutions have focused on multidimensional opportunities and solutions for confronting the COVID-19 pandemic. This study focuses on the mental health effects and sentiment changes caused by the global lockdowns across countries, which have resulted in mental distress among individuals. This paper discusses a technique for identifying the mental state of an individual through sentiment analysis of feelings such as anxiety, depression, and loneliness caused by isolation and the interruption of normal daily routines. The research uses a Neural Network (NN) to extract patterns and validate them against threshold-trained datasets for decision making. The technique was applied to 2,173 global speech samples and classified the behavioral patterns of patients suffering from COVID-19 and pandemic-influenced depression with 93.5% accuracy.</p></abstract>
<kwd-group>
<kwd>sentiment extraction</kwd>
<kwd>speech signal processing</kwd>
<kwd>COVID-19</kwd>
<kwd>mental depression</kwd>
<kwd>neural network</kwd>
</kwd-group>
<counts>
<fig-count count="6"/>
<table-count count="3"/>
<equation-count count="11"/>
<ref-count count="20"/>
<page-count count="9"/>
<word-count count="3526"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>The world is at present facing an uncertain time due to the global pandemic caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the virus responsible for COVID-19. The pandemic has forced nations to impose lockdowns as a preventive measure to slow the spread of the virus, and it has resulted in economic damage and supply-chain disruptions all over the world. There has been a race for a vaccine among modern drug and research organizations. Beyond respiratory disorders and related symptoms, the pandemic has caused major adverse effects such as mental depression, isolation, anxiety, and loneliness. Depression and other mental health issues have been driven by lockdowns and restrictions on travel and work, with a new normal social life now conducted via technological platforms.</p>
<p>The pandemic has brought adverse implications for psychosocial behavior and mental health, such as depression, anxiety, and loneliness. In this research, a systematic evaluation was conducted on the behavior of users based on recorded speech signals, using a machine learning technique to extract keywords and classify data with sentiment analysis techniques (<xref ref-type="bibr" rid="B1">1</xref>). The research in this article also focuses on identifying a user&#x00027;s mental state from the speech signals recorded over the technological platforms used for virtual meetings and other gatherings (<xref ref-type="bibr" rid="B2">2</xref>). The research aims to provide systematic evaluation and validation approaches and to classify patients based on their medical condition.</p>
</sec>
<sec id="s2">
<title>Literature Survey</title>
<p>The global pandemic situation under COVID-19 has left traces of various adverse effects on people, resulting from isolation, lockdown, mental health destabilization, and more. Singh et al. (<xref ref-type="bibr" rid="B3">3</xref>) and Naik et al. (<xref ref-type="bibr" rid="B4">4</xref>) have discussed the impact of COVID-19 and lockdown on the younger generation, focusing on children&#x00027;s behavior and reactions to the new normal. Their study shows the overall implications of isolation for children and adolescents. Pfefferbaum and North (<xref ref-type="bibr" rid="B5">5</xref>) have discussed the relationship between mental stress and the global pandemic situation, with detailed insights into public health emergencies and the influence of the pandemic on existing health conditions. Furthermore, the challenges faced by health care workers (HCWs) and their state of mental stress are documented by Spoorthy et al. (<xref ref-type="bibr" rid="B6">6</xref>). HCWs are frontline workers and hence require assistance in evaluating and validating their mental health through what is now the main mode of communication, i.e., speech signals over digital media platforms and applications; similar discussions are highlighted in other studies (<xref ref-type="bibr" rid="B7">7</xref>, <xref ref-type="bibr" rid="B8">8</xref>).</p>
<p>Some studies focus on technological solutions for the mental distress caused by the pandemic. These solutions outline the use of a telemedicine approach for reaching the large and remote population of a developing country like India. A study by Ahmed et al. (<xref ref-type="bibr" rid="B9">9</xref>) discussed Multidimensional Optimal Medical (MooM) dataset processing over a telemedicine channel. These MooM datasets include a signal processing unit for a standardized approach and can be adapted for processing in the proposed study, with a supporting algorithm from (<xref ref-type="bibr" rid="B10">10</xref>). The method of detecting and validating speech signals proposed in this article is likewise influenced by telemedicine approaches, with a numerical clustering validation by (<xref ref-type="bibr" rid="B11">11</xref>) and (<xref ref-type="bibr" rid="B12">12</xref>).</p>
<p>The latest findings in the survey are recorded with real-time datasets, as discussed in (<xref ref-type="bibr" rid="B4">4</xref>). This approach aims to validate treatment and handling, focusing on pandemic control and coordination. The prediction and modeling of the pandemic are discussed by Iwendi et al. (<xref ref-type="bibr" rid="B13">13</xref>) and Ngabod et al. (<xref ref-type="bibr" rid="B14">14</xref>), who propose a technique for classifying pandemic growth in smart cities. For the processing stage, a dedicated networking model can be utilized, as discussed by Ahmed et al. under a dynamic user-cluster grouping approach (<xref ref-type="bibr" rid="B15">15</xref>&#x02013;<xref ref-type="bibr" rid="B17">17</xref>). These developments have provided a reliable solution for handling pandemic data using text mining and decision support. The classification of COVID-19 studies and surveys is reported and validated in (<xref ref-type="bibr" rid="B18">18</xref>, <xref ref-type="bibr" rid="B19">19</xref>).</p>
</sec>
<sec sec-type="methods" id="s3">
<title>Methodology</title>
<p>The proposed methodology focuses on detecting depression and mental disorders by processing speech signals with a neural network (NN). The process is defined over mass datasets of 2,173 speech samples, as discussed in the architecture model in <xref ref-type="fig" rid="F1">Figure 1</xref>. The aim of the proposed technique is to establish a correlation with trained datasets for extracting, evaluating, and classifying speech samples on demand. These speech signals are interdependent and provide a high order of distinction in recovering and validating the mental stability and sentiments of COVID-19 patients (<xref ref-type="bibr" rid="B20">20</xref>).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Architectural diagram of the proposed technique toward decision making in speech signals.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpubh-09-781827-g0001.tif"/>
</fig>
<p>The processed datasets are computed in a centralized database with user-to-user interface coordination, thereby generating a pool of databases consisting of raw and unprocessed data from the users. The process is initiated with data alignment and pre-processing techniques, as discussed in the mathematical modeling of the proposed technique. The process is designed around a trained database of speech signals with a heap of threshold addresses relating to global attributes such as country, location, gender, age, and professional practice.</p>
<p>The trained datasets provide the thresholds for the extracted attributes of the user input signals. The process is designed with a comparative validation model to ensure correct execution, as demonstrated in <xref ref-type="fig" rid="F2">Figure 2</xref>. The comparative model covers the detailed execution process, such as pattern extraction and the clustering of signal samples into JPEG intermediate files and a dedicated intermediate database, generating a cluster pool for segregating samples based on the region of interest (ROI), as demonstrated in <xref ref-type="fig" rid="F3">Figure 3</xref>. This results in a threshold-value comparison and thus yields a single decision and classification of the user&#x00027;s mental condition.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Comparative model for validating speech signals in distress detection.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpubh-09-781827-g0002.tif"/>
</fig>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>ROI on floating speech samples of multi-users.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpubh-09-781827-g0003.tif"/>
</fig>
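<p>The pipeline above (pre-processing, attribute extraction, and comparison against trained thresholds) can be sketched in a few lines of Python. The sketch below is purely illustrative and is not the authors&#x00027; implementation: the attribute definitions and threshold values are hypothetical stand-ins for the trained thresholds described in the text, and the final decision in the actual system is made by the NN rather than by fixed cut-offs.</p>

```python
# Illustrative sketch only; the attribute definitions and threshold
# values below are hypothetical stand-ins, not the authors' trained model.

THRESHOLDS = {"anxiety": 0.62, "depression": 0.70, "loneliness": 0.55}  # assumed

def preprocess(samples):
    """Align and normalise a raw speech sample to the range [-1, 1]."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

def extract_attributes(samples):
    """Toy attributes: mean absolute level and zero-crossing rate."""
    n = len(samples)
    energy = sum(abs(s) for s in samples) / n
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return {"anxiety": zcr, "depression": energy, "loneliness": (zcr + energy) / 2}

def classify(samples):
    """Compare extracted attributes against the trained thresholds."""
    attrs = extract_attributes(preprocess(samples))
    flags = [label for label, t in THRESHOLDS.items() if attrs[label] >= t]
    return flags or ["no distress indicated"]
```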
</sec>
<sec id="s4">
<title>Mathematical Representation</title>
<p>The computation of the speech signals and the detection of mental stress are achieved under the processed instruction architecture, as demonstrated in <xref ref-type="fig" rid="F1">Figure 1</xref>. The process aims to validate the signals against coordination datasets with a synchronization approach that provides learning and pools clusters of similar patterns, as demonstrated in <xref ref-type="fig" rid="F3">Figure 3</xref>. The mathematical approach is discussed in this section.</p>
<sec>
<title>Attribute Extraction and Dependencies Validation</title>
<p>Consider a dataset <italic>(D)</italic> with a raw calibrated ecosystem of attributes <italic>(A)</italic>, where <italic>A</italic>=<italic>{A</italic><sub>1</sub><italic>,A</italic><sub>2</sub><italic>,A</italic><sub>3</sub><italic>,&#x02026;,A</italic><sub><italic>n</italic></sub><italic>}</italic> such that each attribute <italic>(A</italic><sub><italic>i</italic></sub><italic>)</italic> follows the paradigm of operation in Equation (1).</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>D</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>Where each <italic>i</italic><sup><italic>th</italic></sup> attribute has a correlated paradigm of operation and process extraction. Thus, the extracted attributes <italic>(A</italic><sub><italic>e</italic></sub><italic>)</italic> are as shown in Equation (2).</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mo stretchy='true'>(</mml:mo><mml:mo>&#x00394;</mml:mo><mml:msub><mml:mi>D</mml:mi><mml:mi>z</mml:mi></mml:msub><mml:mo>.</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>D</mml:mi></mml:mrow></mml:mfrac><mml:mo stretchy='true'>)</mml:mo><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>T</mml:mi></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>Where the extracted attribute set <italic>(A</italic><sub><italic>e</italic></sub><italic>)</italic> is computed over the raw attribute set to extract the most relevant threshold attributes, such as the peak frequency of a word or a repeated phrase of a sentence, with a dilution of &#x00394;<italic>D</italic><sub><italic>z</italic></sub> and a mapping with &#x00394;<italic>T</italic> as the threshold paradigm for validating all processed attributes in the speech signal.</p>
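<p>In discrete form, Equation (2) reduces to a sum of finite differences &#x02202;<italic>A</italic><sub><italic>i</italic></sub>/&#x02202;<italic>D</italic> over all attributes, diluted by &#x00394;<italic>D</italic><sub><italic>z</italic></sub> and scaled by the threshold window &#x00394;<italic>T</italic>. The following minimal sketch assumes discrete series for each attribute and for the dataset; all function and parameter names are hypothetical, not from the paper.</p>

```python
def extracted_attribute(attribute_series, dataset_series, delta_t, dilution=1.0):
    """Discrete reading of Equation (2): sum the finite differences
    dA_i/dD over all attributes, dilute by delta_D_z (dilution), and
    scale by the threshold window delta_T (delta_t)."""
    total = 0.0
    for a, d in zip(attribute_series, dataset_series):
        dA = [a[i + 1] - a[i] for i in range(len(a) - 1)]
        dD = [d[i + 1] - d[i] for i in range(len(d) - 1)]
        # accumulate dA_i/dD, skipping points where the dataset is flat
        total += sum(da / dd for da, dd in zip(dA, dD) if dd != 0)
    return dilution * total * delta_t
```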
</sec>
<sec>
<title>Segmentation of Samples</title>
<p>Samples are primarily divided into extracted attribute <italic>(A</italic><sub><italic>e</italic></sub><italic>)</italic> sets, such that each attribute&#x00027;s Region of Interest (ROI) is highlighted and marked in the entire speech signal, as shown in <xref ref-type="fig" rid="F3">Figure 3</xref>.</p>
<p>Consider the segmentation <italic>(S)</italic> of the overall input (speech) signal with a highlighted extracted attribute <italic>(A</italic><sub><italic>e</italic></sub><italic>)</italic>. Each attribute in the signal has an occupancy time <italic>(</italic>&#x00394;<italic>t)</italic> during operation, and thus a reflective ratio of division is processed based on the CNN&#x00027;s evaluation paradigms.</p>
<p>The signal <italic>(S)</italic> of an independent sample <italic>(S</italic><sub><italic>i</italic></sub><italic>)</italic> tends to occur as an ROI at an independent location of the time matrix <italic>(</italic>&#x00394;<italic>t)</italic>. Hence, the segmentation of signal <italic>(S)</italic> is as shown in Equations (3) and (4).</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mrow><mml:mi>S</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>R</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msubsup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:msub><mml:mi>S</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:mstyle></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>s</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:math></disp-formula>
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:mo>&#x02234;</mml:mo><mml:mi>S</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>R</mml:mi><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mo stretchy='true'>(</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mi>S</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo stretchy='true'>)</mml:mo><mml:mo>.</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mstyle></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>Where each signal strength is measured as &#x00394;<italic>R</italic> with a signal time &#x00394;<italic>t</italic> for the extraction of all regional attributes; hence, for segmentation to complete, the schematic of each attribute&#x00027;s signal strength <italic>(</italic>&#x00394;<italic>R)</italic> is computed against the exhausted peak of the ROI from the signal, as shown in <xref ref-type="fig" rid="F3">Figure 3</xref>.</p>
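<p>One simple way to mark the ROIs of Figure 3 in code is a windowed energy threshold: windows whose mean absolute amplitude exceeds a cut-off are marked, and adjacent marked windows are merged into one region. This is a sketch of the idea only; the window size and threshold below are assumed values, not parameters from the paper.</p>

```python
def mark_roi(signal, window=4, threshold=0.5):
    """Return (start, end) index pairs of regions whose mean absolute
    amplitude exceeds the threshold; adjacent windows are merged."""
    regions = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        if sum(abs(x) for x in chunk) / window > threshold:
            if regions and regions[-1][1] == start:
                regions[-1] = (regions[-1][0], start + window)  # extend region
            else:
                regions.append((start, start + window))
    return regions
```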
</sec>
<sec>
<title>Pattern Extraction and Schema Alignment of Datasets</title>
<p>The process of pattern extraction is calibrated with the internally divided segments of the datasets. These datasets are processed with a frequency <italic>(f)</italic> such that the internal segments <italic>(S)</italic>=<italic>{S</italic><sub>1</sub><italic>,S</italic><sub>2</sub><italic>,S</italic><sub>3</sub><italic>,&#x02026;,S</italic><sub><italic>n</italic></sub><italic>}</italic> have calibrated frequencies <italic>(f)</italic>=<italic>{f</italic><sub>1</sub><italic>,f</italic><sub>2</sub><italic>,f</italic><sub>3</sub><italic>,&#x02026;,f</italic><sub><italic>n</italic></sub><italic>}</italic>. Thus, the inter-correlated frequencies can be defined and associated as <italic>(S</italic><sub><italic>f</italic></sub><italic>)</italic>=<italic>{S</italic><sub><italic>f</italic>1</sub><italic>,S</italic><sub><italic>f</italic>2</sub><italic>,S</italic><sub><italic>f</italic>3</sub><italic>,&#x02026;,S</italic><sub><italic>fn</italic></sub><italic>}</italic>.</p>
<p>The pattern-extraction process for speech signals is internally correlated with the amplitude of the signal <italic>(amp)</italic>, represented as <italic>f</italic><sub><italic>amp</italic></sub>=<italic>{f</italic><sub><italic>amp</italic>1</sub><italic>,f</italic><sub><italic>amp</italic>2</sub><italic>,f</italic><sub><italic>amp</italic>3</sub><italic>,&#x02026;,f</italic><sub><italic>ampn</italic></sub><italic>}</italic>. The amplitude of each frequency feature can be represented and extracted as shown in Equation (5).</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M5"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where each signal pattern <italic>(S</italic><sub><italic>i</italic></sub><italic>(n))</italic> represents the overall coordination in speech signals, and the &#x02018;<italic>&#x003B1;</italic><sub><italic>i</italic></sub>&#x00027; coefficients represent the band filters of the speech signal with amplitude and frequency coefficients. On extraction of the patterns correlated in Equation (5), the frequency patterns can be sorted by independent bandwidth, as shown in Equation (6).</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M6"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>R</mml:mi><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x0007B;</mml:mo><mml:mo stretchy='true'>(</mml:mo><mml:mi>log</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>S</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:msup><mml:mo stretchy='true'>)</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo>&#x0007D;</mml:mo></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<disp-formula id="E7"><label>(7)</label><mml:math id="M7"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>R</mml:mi><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x0007B;</mml:mo><mml:mo stretchy='true'>(</mml:mo><mml:mi>log</mml:mi><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>.</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mfrac><mml:mrow><mml:msup><mml:mo>&#x02202;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>S</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mstyle><mml:mo stretchy='true'>)</mml:mo><mml:mo>&#x0007D;</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>The &#x02018;<italic>P</italic><sub><italic>i</italic></sub>&#x00027; in Equation (7) is the pattern of repeated learning from the CNN framework. The internal arrangement can be represented as the frequency <italic>(f)</italic> under the operation of the amplitude, represented as <italic>f</italic><sub><italic>amp</italic></sub>, and further graded by the Gaussian constant <italic>(G)</italic>. The process in Equation (7) is then concluded, as shown in Equation (8).</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M8"><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>R</mml:mi><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x0007B;</mml:mo><mml:mo stretchy='true'>(</mml:mo><mml:mn>2</mml:mn><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mfrac><mml:mrow><mml:msup><mml:mo>&#x02202;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy='true'>(</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:msub><mml:mi>&#x003B1;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo 
stretchy='false'>(</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='true'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac><mml:mo stretchy='true'>)</mml:mo><mml:mo>&#x0007D;</mml:mo></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<disp-formula id="E9"><label>(9)</label><mml:math id="M9"><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>4</mml:mn><mml:mi>&#x003C0;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>R</mml:mi><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x0007B;</mml:mo><mml:mo stretchy='true'>(</mml:mo><mml:mi>log</mml:mi><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mfrac><mml:mrow><mml:msup><mml:mo>&#x02202;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy='true'>(</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mo>&#x02329;</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:msub><mml:mi>&#x003B1;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>j</mml:mi><mml:mo 
stretchy='false'>)</mml:mo><mml:mo>.</mml:mo><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0232A;</mml:mo><mml:mo stretchy='true'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac><mml:mo stretchy='true'>)</mml:mo><mml:mo>&#x0007D;</mml:mo></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<disp-formula id="E10"><label>(10)</label><mml:math id="M10"><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>4</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mo>.</mml:mo><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>g</mml:mi><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>R</mml:mi><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>&#x0007B;</mml:mo><mml:mo stretchy='true'>(</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mfrac><mml:mrow><mml:msup><mml:mo>&#x02202;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy='true'>(</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mo>&#x0007B;</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mo>&#x02329;</mml:mo><mml:msub><mml:mi>&#x003B1;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo 
stretchy='false'>(</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x0007D;</mml:mo><mml:mo stretchy='true'>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mstyle><mml:mo stretchy='true'>)</mml:mo><mml:mo>&#x0007D;</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>Equation (10) thus represents the coordinates of the pattern extracted and validated with respect to a segment <italic>(S</italic><sub><italic>i</italic></sub><italic>)</italic>. In summary, the patterns can be represented as <italic>P</italic> = <italic>{P</italic><sub>1</sub><italic>,P</italic><sub>2</sub><italic>,P</italic><sub>3</sub><italic>,&#x02026;,P</italic><sub><italic>n</italic></sub><italic>}</italic>, correlated with the corresponding segment coordinates <italic>S</italic><sub><italic>p</italic></sub>=<italic>{S</italic><sub><italic>p</italic>1</sub><italic>,S</italic><sub><italic>p</italic>2</sub><italic>,S</italic><sub><italic>p</italic>3</sub><italic>,&#x02026;,S</italic><sub><italic>pn</italic></sub><italic>}</italic>, where &#x02018;<italic>n</italic>&#x00027; is the last segment of the given input signal.</p>
</sec>
<sec>
<title>Clustering and Classification of Datasets</title>
<p>Equation (10) retrieves the pattern of each individual segment, and the coefficients of these segments are summarized and represented as <italic>S</italic><sub><italic>p</italic></sub>=<italic>{S</italic><sub><italic>p</italic>1</sub><italic>,S</italic><sub><italic>p</italic>2</sub><italic>,S</italic><sub><italic>p</italic>3</sub><italic>,&#x02026;,S</italic><sub><italic>pn</italic></sub><italic>}</italic>. The resulting clustering is shown in <xref ref-type="fig" rid="F4">Figure 4</xref>.</p>
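The paper does not include code, so the following is only an illustrative sketch of how a per-segment summary such as <italic>S</italic><sub><italic>p</italic></sub> might be computed. The fixed segment length and the choice of log energy as the summary coefficient are assumptions for demonstration, not the authors' actual feature.

```python
import numpy as np

def segment_coefficients(signal, seg_len):
    """Split a 1-D speech signal into fixed-length segments and return one
    summary coefficient per segment (here: log energy), i.e. one entry of
    S_p = {S_p1, ..., S_pn}. Trailing samples that do not fill a whole
    segment are dropped."""
    n_seg = len(signal) // seg_len
    segments = np.reshape(signal[:n_seg * seg_len], (n_seg, seg_len))
    energy = np.sum(segments ** 2, axis=1)   # per-segment energy
    return np.log(energy + 1e-12)            # log compresses the dynamic range

# Toy signal: a quiet half followed by a loud half.
sig = np.concatenate([0.01 * np.ones(400), 0.5 * np.ones(400)])
coeffs = segment_coefficients(sig, seg_len=200)
print(len(coeffs))             # 4 segments
print(coeffs[0] < coeffs[-1])  # True: louder segments score higher
```

Any per-segment statistic (spectral centroid, MFCC means, etc.) could replace log energy without changing the surrounding pipeline.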
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Cluster representation of extracted patterns.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpubh-09-781827-g0004.tif"/>
</fig>
<p>The cluster <italic>(C)</italic> is formed from a group of values and their corresponding coefficients for value re-compensation. The clusters are evaluated internally with a focus on associating related pattern coefficients, as formalized in Equation (11).</p>
<disp-formula id="E11"><label>(11)</label><mml:math id="M11"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mrow><mml:mo>&#x0007B;</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mo stretchy='true'>[</mml:mo><mml:msubsup><mml:mrow><mml:msup><mml:mstyle mathsize='140%' displaystyle='true'><mml:mo>&#x02211;</mml:mo></mml:mstyle><mml:mtext>&#x000A0;</mml:mtext></mml:msup></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mo stretchy='true'>(</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:msub><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>P</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x02202;</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo stretchy='true'>)</mml:mo><mml:mo>.</mml:mo><mml:mo>&#x00394;</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='true'>]</mml:mo><mml:mo>&#x0007D;</mml:mo></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>where each cluster <italic>(C</italic><sub><italic>i</italic></sub><italic>)</italic> is validated against its corresponding pattern coefficient and an internally evaluated threshold value <italic>(</italic>&#x00394;<italic>T)</italic>. In summary, the clusters are <italic>(C)</italic> = <italic>{C</italic><sub>1</sub><italic>,C</italic><sub>2</sub><italic>,C</italic><sub>3</sub><italic>,&#x02026;,C</italic><sub><italic>n</italic></sub><italic>}</italic>. These clusters share associations of common patterns, represented, for example, as <italic>{(C</italic><sub><italic>i</italic></sub>&#x02229; <italic>C</italic><sub><italic>j</italic></sub><italic>)</italic> &#x02229; <italic>C</italic><sub><italic>k</italic></sub><italic>}</italic>, and these associations are subjected to attribute validation, as shown in <xref ref-type="fig" rid="F4">Figure 4</xref>.</p>
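As a simplified, discrete analogue of the thresholded grouping in Equation (11) (not the authors' implementation), consecutive pattern coefficients can be assigned to the same cluster while their change stays within &#x00394;<italic>T</italic>; the coefficient values below are invented for illustration.

```python
def threshold_clusters(pattern, delta_t):
    """Group consecutive pattern coefficients P_i into clusters C = {C_1,...,C_n}.
    A new cluster opens whenever |P[j] - P[j-1]| -- the discrete analogue of
    the derivative term in Equation (11) -- exceeds the threshold delta_t."""
    clusters = [[pattern[0]]]
    for prev, cur in zip(pattern, pattern[1:]):
        if abs(cur - prev) > delta_t:
            clusters.append([cur])    # change too large: start a new cluster
        else:
            clusters[-1].append(cur)  # within threshold: extend current cluster
    return clusters

P = [0.10, 0.12, 0.11, 0.45, 0.47, 0.90]
C = threshold_clusters(P, delta_t=0.1)
print(len(C))  # 3 clusters: the two jumps (0.11->0.45, 0.47->0.90) exceed 0.1
```

The choice of &#x00394;<italic>T</italic> controls cluster granularity: a small threshold yields many small clusters, a large one merges distinct patterns.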
</sec>
<sec>
<title>Threshold Validation and Decision Making</title>
<p>The clusters, and the classification of speech signals based on them, are validated and approved for processing into decision making. Decision making is driven by consulting a threshold value: the technique extracts the validated pattern coefficients and synchronizes them with the most likely match under schema validation. The proposed approach validates the signal-segmentation decision using the threshold value to segregate the speech-signal dataset by emotion. These emotion-based evaluations are computational, and hence the most likely decision is returned.</p>
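A minimal sketch of such a "most likely" decision step is a nearest-template match: the cluster's coefficient is compared against reference coefficients per emotion, and the closest label wins. The labels and reference values here are hypothetical, not taken from the paper.

```python
def classify_emotion(cluster_coeff, templates):
    """Return the emotion label whose reference coefficient is closest to
    the cluster's coefficient -- the 'most likely decision' described above.
    `templates` maps an emotion label to an illustrative reference value."""
    return min(templates, key=lambda label: abs(templates[label] - cluster_coeff))

templates = {"depression": 0.02, "anxiety": 0.18, "loneliness": 0.09}
print(classify_emotion(0.03, templates))  # depression (|0.03 - 0.02| is smallest)
```

In practice the comparison would involve full pattern coefficients rather than a single scalar, but the decision rule has the same shape.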
</sec>
</sec>
<sec id="s5">
<title>Results and Discussion</title>
<p>The proposed technique successfully retrieves the signal attributes and the prediction ratio for evaluation. Input signals from users are uploaded via a remote connection platform to a centralized database in a cloud computing ecosystem built on AWS services. The datasets are then processed and validated using a multidimensional approach.</p>
<p>The variation in predicting sentiments is based on the information derived from the clustered datasets. The prediction ratio is summarized in <xref ref-type="fig" rid="F5">Figures 5</xref>, <xref ref-type="fig" rid="F6">6</xref>, together with a comparative evaluation against previous systems. <xref ref-type="table" rid="T1">Table 1</xref> shows the parameters related to mental stress and the paradigms that provide decision support. The table highlights evaluation parameters such as the occurrence delay of a keyword in clustering, as computed in Equation (11). The supported approach then classifies the pattern of these keyword-occurrence sequences for decision making.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Performance computation of proposed technique on independent parameters.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpubh-09-781827-g0005.tif"/>
</fig>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Outcome evaluation of proposed technique.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpubh-09-781827-g0006.tif"/>
</fig>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Decision support and evaluation parameters.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Patient relevance (sentiments)</bold></th>
<th valign="top" align="center"><bold>Cluster range</bold></th>
<th valign="top" align="center"><bold>Occurrence difference (Avg. sec)</bold></th>
<th valign="top" align="center"><bold>Pattern differences</bold></th>
<th valign="top" align="center"><bold>Decision accuracy (%)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">COVID-19 (positive)</td>
<td valign="top" align="center">Average</td>
<td valign="top" align="center">3.211</td>
<td valign="top" align="center">0.327</td>
<td valign="top" align="center">97.23</td>
</tr>
<tr>
<td valign="top" align="left">Post COVID-19 (positive)</td>
<td valign="top" align="center">High</td>
<td valign="top" align="center">1.432</td>
<td valign="top" align="center">0.129</td>
<td valign="top" align="center">94.92</td>
</tr>
<tr>
<td valign="top" align="left">Loneliness</td>
<td valign="top" align="center">High</td>
<td valign="top" align="center">1.328</td>
<td valign="top" align="center">0.091</td>
<td valign="top" align="center">96.91</td>
</tr>
<tr>
<td valign="top" align="left">Anxiety</td>
<td valign="top" align="center">Average</td>
<td valign="top" align="center">2.114</td>
<td valign="top" align="center">0.181</td>
<td valign="top" align="center">92.17</td>
</tr>
<tr>
<td valign="top" align="left">Depression</td>
<td valign="top" align="center">High</td>
<td valign="top" align="center">0.994</td>
<td valign="top" align="center">0.021</td>
<td valign="top" align="center">97.28</td>
</tr>
<tr>
<td valign="top" align="left">Normal (non-COVID-19)</td>
<td valign="top" align="center">Low</td>
<td valign="top" align="center">5.251</td>
<td valign="top" align="center">0.448</td>
<td valign="top" align="center">95.39</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The results of data/signal processing and decision making are shown in <xref ref-type="table" rid="T2">Table 2</xref>. The results are promising, with a precision of 90% or higher for users across languages and locations. The results of processing a single sample are included in <xref ref-type="table" rid="T3">Table 3</xref>. The processed signal magnitude and the power spectrum computation demonstrate a high degree of signal clarity in analysis and validation.</p>
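For reference, the precision, recall, and accuracy figures of the kind reported in Table 2 follow from standard confusion-matrix counts; the counts in this sketch are invented for illustration.

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, and accuracy (in %) from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)
    accuracy = 100.0 * (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

p, r, a = detection_metrics(tp=90, fp=5, fn=10, tn=95)
print(round(p, 2), round(r, 2), round(a, 2))  # 94.74 90.0 92.5
```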
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Performance matrix for speech signal in mental distress validation.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Sample (Age group) (yrs)</bold></th>
<th valign="top" align="center"><bold>Approach</bold></th>
<th valign="top" align="center"><bold>Precision (%)</bold></th>
<th valign="top" align="center"><bold>Recall (%)</bold></th>
<th valign="top" align="center"><bold>Detection-score (%)</bold></th>
<th valign="top" align="center"><bold>Accuracy (%)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">5&#x02013;9</td>
<td valign="top" align="center">Unclassified</td>
<td valign="top" align="center">92.43</td>
<td valign="top" align="center">89.23</td>
<td valign="top" align="center">89.3</td>
<td valign="top" align="center">90</td>
</tr>
<tr>
<td valign="top" align="left">10&#x02013;15</td>
<td valign="top" align="center">Unclassified</td>
<td valign="top" align="center">95.72</td>
<td valign="top" align="center">93.22</td>
<td valign="top" align="center">93.08</td>
<td valign="top" align="center">95</td>
</tr>
<tr>
<td valign="top" align="left">16&#x02013;25</td>
<td valign="top" align="center">Classified</td>
<td valign="top" align="center">97.12</td>
<td valign="top" align="center">97.48</td>
<td valign="top" align="center">98.1</td>
<td valign="top" align="center">98.48</td>
</tr>
<tr>
<td valign="top" align="left">26&#x02013;35</td>
<td valign="top" align="center">Classified</td>
<td valign="top" align="center">97.78</td>
<td valign="top" align="center">97.53</td>
<td valign="top" align="center">98.9</td>
<td valign="top" align="center">98.97</td>
</tr>
<tr>
<td valign="top" align="left">36&#x02013;55</td>
<td valign="top" align="center">Classified</td>
<td valign="top" align="center">98.3</td>
<td valign="top" align="center">97.12</td>
<td valign="top" align="center">99.12</td>
<td valign="top" align="center">99.23</td>
</tr>
<tr>
<td valign="top" align="left">56&#x02013;60</td>
<td valign="top" align="center">Classified</td>
<td valign="top" align="center">95.33</td>
<td valign="top" align="center">91.23</td>
<td valign="top" align="center">97.12</td>
<td valign="top" align="center">94.19</td>
</tr>
<tr>
<td valign="top" align="left">61&#x02013;75</td>
<td valign="top" align="center">Unclassified</td>
<td valign="top" align="center">96.12</td>
<td valign="top" align="center">89.23</td>
<td valign="top" align="center">91.07</td>
<td valign="top" align="center">88.32</td>
</tr>
<tr>
<td valign="top" align="left">76&#x02013;100</td>
<td valign="top" align="center">Unclassified</td>
<td valign="top" align="center">92.31</td>
<td valign="top" align="center">88.7</td>
<td valign="top" align="center">87.2</td>
<td valign="top" align="center">81.2</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Signal processing and analysis appendix.</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td valign="top" align="left" colspan="3"><inline-graphic xlink:href="fpubh-09-781827-i0001.tif"/></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec sec-type="conclusions" id="s6">
<title>Conclusion</title>
<p>The technique proposed in the present study uses a neural network methodology to learn and develop a pool of clusters and patterns, providing a systematic and reliable decision for categorizing speech signals. The processing system is based on open database processing to validate the mental health conditions of users during the isolation and lockdowns caused by the COVID-19 pandemic. The results show a promising outcome, with a precision of 90% or higher across various users, and a projected accuracy of 93.5% under the open validation platform in a computational evaluation. In future work, the proposed technique could be extended to classify and categorize patients&#x00027; behavior, with supervised approaches to keyword extraction and classification in dynamic signals.</p>
</sec>
<sec sec-type="data-availability" id="s7">
<title>Data Availability Statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>Conceptualization and writing: SA, DS, SB, EA, and MA. Methodology: SA, DS, SB, AK, and MA. Investigation and programming: SA and DS. Resources: SB, EA, MA, and AK. Review: EA and AK. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>The authors extend their appreciation to King Saud University for funding this work through Researchers Supporting Project number (RSP-2021/164), King Saud University, Riyadh, Saudi Arabia.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>ST</given-names></name> <name><surname>Basha</surname> <given-names>SM</given-names></name> <name><surname>Arumugam</surname> <given-names>SR</given-names></name> <name><surname>Kodabagi</surname> <given-names>MM</given-names></name></person-group>. <source>Pattern Recognition: An Introduction</source>. <publisher-name>MileStone Research Publications</publisher-name> (<year>2021</year>).</citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>C</given-names></name> <name><surname>Ibeke</surname> <given-names>E</given-names></name> <name><surname>Wyner</surname> <given-names>A</given-names></name> <name><surname>Guerin</surname> <given-names>F</given-names></name></person-group>. <article-title>Sentiment&#x02013;topic modeling in text mining</article-title>. <source>Wiley Interdisc Rev.</source> (<year>2015</year>) <volume>5</volume>:<fpage>246</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1002/widm.1161</pub-id><pub-id pub-id-type="pmid">25855820</pub-id></citation></ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname> <given-names>S</given-names></name> <name><surname>Roy</surname> <given-names>MD</given-names></name> <name><surname>Sinha</surname> <given-names>CPT</given-names></name> <name><surname>Parveen</surname> <given-names>CPTMS</given-names></name> <name><surname>Joshi</surname> <given-names>CPTG</given-names></name></person-group>. <article-title>Impact of COVID-19 and lockdown on mental health of children and adolescents: a narrative review with recommendations</article-title>. <source>Psychiatry Res.</source> (<year>2020</year>) <volume>2020</volume>:<fpage>113429</fpage>. <pub-id pub-id-type="doi">10.1016/j.psychres.2020.113429</pub-id><pub-id pub-id-type="pmid">32882598</pub-id></citation></ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Naik</surname> <given-names>PA</given-names></name> <name><surname>Owolabi</surname> <given-names>KM</given-names></name> <name><surname>Zu</surname> <given-names>J</given-names></name> <name><surname>Naik</surname> <given-names>MUD</given-names></name></person-group>. <article-title>Modeling the transmission dynamics of COVID-19 pandemic in caputo type fractional derivative</article-title>. <source>J Multisc Model.</source> (<year>2021</year>) <volume>20</volume>:<fpage>1</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1142/S1756973721500062</pub-id></citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfefferbaum</surname> <given-names>B</given-names></name> <name><surname>North</surname> <given-names>CS</given-names></name></person-group>. <article-title>Mental health and the Covid-19 pandemic</article-title>. <source>N Engl J Med.</source> (<year>2020</year>) <volume>383</volume>:<fpage>510</fpage>&#x02013;<lpage>2</lpage>. <pub-id pub-id-type="doi">10.1056/NEJMp2008017</pub-id><pub-id pub-id-type="pmid">32283003</pub-id></citation></ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spoorthy</surname> <given-names>MS</given-names></name> <name><surname>Pratapa</surname> <given-names>SK</given-names></name> <name><surname>Mahant</surname> <given-names>S</given-names></name></person-group>. <article-title>Mental health problems faced by healthcare workers due to the COVID-19 pandemic&#x02013;A review</article-title>. <source>Asian J Psychiatry.</source> (<year>2020</year>) <volume>51</volume>:<fpage>102119</fpage>. <pub-id pub-id-type="doi">10.1016/j.ajp.2020.102119</pub-id><pub-id pub-id-type="pmid">32339895</pub-id></citation></ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cullen</surname> <given-names>W</given-names></name> <name><surname>Gulati</surname> <given-names>G</given-names></name> <name><surname>Kelly</surname> <given-names>BD</given-names></name></person-group>. <article-title>Mental health in the Covid-19 pandemic</article-title>. <source>QJM Int J Med.</source> (<year>2020</year>) <volume>113</volume>:<fpage>311</fpage>&#x02013;<lpage>2</lpage>. <pub-id pub-id-type="doi">10.1093/qjmed/hcaa110</pub-id><pub-id pub-id-type="pmid">32227218</pub-id></citation></ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iwendi</surname> <given-names>C</given-names></name> <name><surname>Mahboob</surname> <given-names>K</given-names></name> <name><surname>Khalid</surname> <given-names>Z</given-names></name> <name><surname>Javed</surname> <given-names>AR</given-names></name> <name><surname>Rizwan</surname> <given-names>M</given-names></name> <name><surname>Ghosh</surname> <given-names>U</given-names></name></person-group>. <article-title>Classification of COVID-19 individuals using adaptive neuro-fuzzy inference system</article-title>. <source>Multi Syst.</source> (<year>2021</year>) <fpage>1</fpage>&#x02013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1007/s00530-021-00774-w</pub-id><pub-id pub-id-type="pmid">33814730</pub-id></citation></ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>ST</given-names></name> <name><surname>Sankar</surname> <given-names>S</given-names></name> <name><surname>Sandhya</surname> <given-names>M</given-names></name></person-group>. <article-title>Multi-objective optimal medical data informatics standardization and processing technique for telemedicine via machine learning approach</article-title>. <source>J Amb Intell Hum Comput.</source> (<year>2020</year>) <volume>12</volume>:<fpage>5349</fpage>&#x02013;<lpage>58</lpage>. <pub-id pub-id-type="doi">10.1007/s12652-020-02016-9</pub-id></citation>
</ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>ST</given-names></name> <name><surname>Sandhya</surname> <given-names>M</given-names></name> <name><surname>Sankar</surname> <given-names>S</given-names></name></person-group>. <article-title>An optimized RTSRV machine learning algorithm for biomedical signal transmission and regeneration for telemedicine environment</article-title>. <source>Proc Comp Sci.</source> (<year>2019</year>) <volume>152</volume>:<fpage>140</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/j.procs.2019.05.036</pub-id></citation>
</ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ayoub</surname> <given-names>A</given-names></name> <name><surname>Mahboob</surname> <given-names>K</given-names></name> <name><surname>Javed</surname> <given-names>AR</given-names></name> <name><surname>Rizwan</surname> <given-names>M</given-names></name> <name><surname>Gadekallu</surname> <given-names>TR</given-names></name> <name><surname>Abidi</surname> <given-names>MH</given-names></name> <etal/></person-group>. <article-title>Classification and categorization of covid-19 outbreak in Pakistan</article-title>. <source>Comput Mater Continua.</source> (<year>2021</year>) <volume>69</volume>:<fpage>1253</fpage>&#x02013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.32604/cmc.2021.015655</pub-id><pub-id pub-id-type="pmid">34464419</pub-id></citation></ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kumar</surname> <given-names>SS</given-names></name> <name><surname>Ahmed</surname> <given-names>ST</given-names></name> <name><surname>Vigneshwaran</surname> <given-names>P</given-names></name> <name><surname>Sandeep</surname> <given-names>H</given-names></name> <name><surname>Singh</surname> <given-names>HM</given-names></name></person-group>. <article-title>Two phase cluster validation approach towards measuring cluster quality in unstructured and structured numerical datasets</article-title>. <source>J Ambient Intell Hum Comput.</source> (<year>2020</year>) <volume>12</volume>:<fpage>7581</fpage>&#x02013;<lpage>594</lpage>. <pub-id pub-id-type="doi">10.1007/s12652-020-02487-w</pub-id></citation>
</ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iwendi</surname> <given-names>C</given-names></name> <name><surname>Bashir</surname> <given-names>AK</given-names></name> <name><surname>Peshkar</surname> <given-names>A</given-names></name> <name><surname>Sujatha</surname> <given-names>R</given-names></name> <name><surname>Chatterjee</surname> <given-names>JM</given-names></name> <name><surname>Pasupuleti</surname> <given-names>S</given-names></name> <etal/></person-group>. <article-title>COVID-19 patient health prediction using boosted random forest algorithm</article-title>. <source>Front Public Health.</source> (<year>2020</year>) <volume>8</volume>:<fpage>357</fpage>. <pub-id pub-id-type="doi">10.3389/fpubh.2020.00357</pub-id><pub-id pub-id-type="pmid">32719767</pub-id></citation></ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ngabo</surname> <given-names>D</given-names></name> <name><surname>Dong</surname> <given-names>W</given-names></name> <name><surname>Ibeke</surname> <given-names>E</given-names></name> <name><surname>Iwendi</surname> <given-names>C</given-names></name> <name><surname>Masabo</surname> <given-names>E</given-names></name></person-group>. <article-title>Tackling pandemics in smart cities using machine learning architecture</article-title>. <source>Math Biosci Eng</source>. (<year>2021</year>) <volume>18</volume>:<fpage>8444</fpage>&#x02013;<lpage>61</lpage>. <pub-id pub-id-type="doi">10.3934/mbe.2021418</pub-id><pub-id pub-id-type="pmid">34814307</pub-id></citation></ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>ST</given-names></name> <name><surname>Sandhya</surname> <given-names>M</given-names></name> <name><surname>Sankar</surname> <given-names>S</given-names></name></person-group>. <article-title>TelMED: dynamic user clustering resource allocation technique for MooM datasets under optimizing telemedicine network</article-title>. <source>Wireless Pers Commun.</source> (<year>2020</year>) <volume>112</volume>:<fpage>1061</fpage>&#x02013;<lpage>77</lpage>. <pub-id pub-id-type="doi">10.1007/s11277-020-07091-x</pub-id></citation>
</ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Al-Shammari</surname> <given-names>NK</given-names></name> <name><surname>Syed</surname> <given-names>TH</given-names></name> <name><surname>Syed</surname> <given-names>MB</given-names></name></person-group>. <article-title>An Edge&#x02013;IoT framework and prototype based on blockchain for smart healthcare applications</article-title>. <source>Eng Technol Appl Sci Res.</source> (<year>2021</year>) <volume>11</volume>:<fpage>7326</fpage>&#x02013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.48084/etasr.4245</pub-id></citation>
</ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>ST</given-names></name> <name><surname>Sankar</surname> <given-names>S</given-names></name></person-group>. <article-title>Investigative protocol design of layer optimized image compression in telemedicine environment</article-title>. <source>Proc Comput Sci.</source> (<year>2020</year>) <volume>167</volume>:<fpage>2617</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1016/j.procs.2020.03.323</pub-id></citation>
</ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bhattacharya</surname> <given-names>S</given-names></name> <name><surname>Maddikunta</surname> <given-names>PKR</given-names></name> <name><surname>Pham</surname> <given-names>QV</given-names></name> <name><surname>Gadekallu</surname> <given-names>TR</given-names></name> <name><surname>Chowdhary</surname> <given-names>CL</given-names></name> <name><surname>Alazab</surname> <given-names>M</given-names></name> <etal/></person-group>. <article-title>Deep learning and medical image processing for coronavirus (COVID-19) pandemic: a survey</article-title>. <source>Sustain Cities Soc.</source> (<year>2021</year>) <volume>65</volume>:<fpage>102589</fpage>. <pub-id pub-id-type="doi">10.1016/j.scs.2020.102589</pub-id><pub-id pub-id-type="pmid">33169099</pub-id></citation></ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dhanamjayulu</surname> <given-names>C</given-names></name> <name><surname>Nizhal</surname> <given-names>UN</given-names></name> <name><surname>Maddikunta</surname> <given-names>PKR</given-names></name> <name><surname>Gadekallu</surname> <given-names>TR</given-names></name> <name><surname>Iwendi</surname> <given-names>C</given-names></name> <name><surname>Wei</surname> <given-names>C</given-names></name> <etal/></person-group>. <article-title>Identification of malnutrition and prediction of BMI from facial images using real-time image processing and machine learning</article-title>. <source>IET Image Process.</source> (<year>2021</year>) <pub-id pub-id-type="doi">10.1049/IPR2.12222</pub-id></citation>
</ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reddy</surname> <given-names>PK</given-names></name> <name><surname>Reddy</surname> <given-names>TS</given-names></name> <name><surname>Balakrishnan</surname> <given-names>S</given-names></name> <name><surname>Basha</surname> <given-names>SM</given-names></name> <name><surname>Poluru</surname> <given-names>RK</given-names></name></person-group>. <article-title>Heart disease prediction using machine learning algorithm</article-title>. <source>Int J Innov Technol Expl Eng.</source> (<year>2019</year>) <volume>8</volume>:<fpage>2603</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.35940/ijitee.J9340.0881019</pub-id></citation>
</ref>
</ref-list>
</back>
</article>