<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="editorial">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Sci.</journal-id>
<journal-title>Frontiers in Computer Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Sci.</abbrev-journal-title>
<issn pub-type="epub">2624-9898</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fcomp.2022.937433</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Computer Science</subject>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Editorial: Machine Vision for Assistive Technologies</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Leo</surname> <given-names>Marco</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1153157/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Farinella</surname> <given-names>Giovanni Maria</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/134503/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Furnari</surname> <given-names>Antonino</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1133586/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Medioni</surname> <given-names>Gerard</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1153038/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy</institution>, <addr-line>Lecce</addr-line>, <country>Italy</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Mathematics and Computer Science, University of Catania</institution>, <addr-line>Catania</addr-line>, <country>Italy</country></aff>
<aff id="aff3"><sup>3</sup><institution>Institute of Robotics and Intelligent Systems, University of Southern California</institution>, <addr-line>Los Angeles, CA</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited and reviewed by: Marcello Pelillo, Ca&#x00027; Foscari University of Venice, Italy</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Marco Leo <email>marco.leo&#x00040;cnr.it</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Computer Vision, a section of the journal Frontiers in Computer Science</p></fn></author-notes>
<pub-date pub-type="epub">
<day>26</day>
<month>05</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>4</volume>
<elocation-id>937433</elocation-id>
<history>
<date date-type="received">
<day>06</day>
<month>05</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>05</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Leo, Farinella, Furnari and Medioni.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Leo, Farinella, Furnari and Medioni</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<related-article id="RA1" related-article-type="commentary-article" xlink:href="https://www.frontiersin.org/research-topics/18108/machine-vision-for-assistive-technologies" ext-link-type="uri">Editorial on the Research Topic <article-title>Machine Vision for Assistive Technologies</article-title>
</related-article>
<kwd-group>
<kwd>assistive technologies</kwd>
<kwd>egocentric vision</kwd>
<kwd>industrial applications</kwd>
<kwd>human-robot interaction</kwd>
<kwd>symbiotic human-machine systems</kwd>
<kwd>editorial</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="5"/>
<page-count count="2"/>
<word-count count="1159"/>
</counts>
</article-meta>
</front>
<body>
<p>The last decade has witnessed the significant impact of Computer Vision and Robotics on real-world products. Traditional Computer Vision problems such as tracking, 3D reconstruction, detection, recognition, odometry, and navigation are now solved with significantly higher accuracy using Machine Learning (Farinella et al., <xref ref-type="bibr" rid="B1">2020</xref>). However, most of these results have focused on constrained application scenarios that do not involve the integration of feedback from the user (Leo et al., <xref ref-type="bibr" rid="B4">2019</xref>). Since these applications do not consider the user&#x00027;s intentions and goals, they tend to be of limited use when it is necessary to assist humans.</p>
<p>With the pervasive successes of Computer Vision and Robotics and the advent of Industry 4.0, it has become paramount to design systems that can truly assist humans and augment their abilities to tackle both physical and intellectual tasks. We broadly refer to such systems as &#x0201C;assistive technologies&#x0201D; (Leo et al., <xref ref-type="bibr" rid="B5">2017</xref>). Examples of these technologies include approaches that assist visually impaired people in navigating and perceiving the world, wearable devices that use artificial intelligence and mixed or augmented reality to improve perception and bring computation directly to the user, and systems designed to aid industrial processes and improve the safety of workers (Leo and Farinella, <xref ref-type="bibr" rid="B3">2018</xref>). These technologies need to consider an operational paradigm in which the user is central and can both influence and be influenced by the system. Although some examples of this approach exist (Fosch-Villaronga et al., <xref ref-type="bibr" rid="B2">2021</xref>), implementing applications according to this &#x0201C;human-in-the-loop&#x0201D; scenario still requires considerable effort to reach an adequate level of reliability, and it introduces challenging satellite issues related to usability, privacy, and acceptability.</p>
<p>The main aim of this Research Topic was to gather contributions from the diverse fields of engineering and computer science on technologies, involving Computer Vision and Robotics, for the real-time, continuous assistance and support of humans performing any task.</p>
<p>At the end of a double-blind review process that involved distinguished researchers from industry and academia, four papers were accepted.</p>
<p>The first paper (sorted by acceptance date) is titled &#x0201C;<italic>Communicating Photograph Content Through Tactile Images to People With Visual Impairments</italic> (<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fcomp.2021.787735">Pakenaite et al.</ext-link>).&#x0201D; It introduces an approach to make visual content accessible <italic>via</italic> touch. State-of-the-art algorithms are used to automatically process an input photograph into a collage of icons that depict the most important semantic aspects of a scene. This collage is then printed onto swell paper, thus allowing visually impaired people to access photographs and better enjoy books, tourist brochures, etc.</p>
<p>The paper &#x0201C;<italic>Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography</italic>&#x0201D; demonstrated the feasibility of segmenting cerebral arteries in the operating field view using deep learning, as well as the effectiveness of a method for automatically generating blood-vessel ground truth from ICG fluorescence videoangiography (<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnbot.2021.735177">Kim et al.</ext-link>).</p>
<p>The paper &#x0201C;<italic>Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks</italic>&#x0201D; deals with robotic leg prostheses and exoskeletons, which can provide powered locomotor assistance to older adults and/or persons with physical disabilities (<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnbot.2021.730965">Laschowski et al.</ext-link>). Inspired by the human vision-locomotor control system, the authors developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environment prior to physical interaction, thereby allowing for more accurate and robust high-level control decisions.</p>
<p>The last paper, &#x0201C;<italic>Recognition and Classification of Ship Images Based on SMS-PCNN Model</italic>,&#x0201D; lies in the field of ship image recognition and classification (<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnbot.2022.889308">Wang et al.</ext-link>). To extract ship features at different scales, the authors proposed a multi-scale parallel CNN with three characteristics: (1) image features of different sizes are extracted by parallel convolutional branches with different receptive fields; (2) the number of channels is adjusted twice to extract features and eliminate redundant information; and (3) residual connections are used to extend the network depth and mitigate gradient vanishing.</p>
<sec id="s1">
<title>Author Contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s2">
<title>Publisher&#x00027;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farinella</surname> <given-names>G. M.</given-names></name> <name><surname>Leo</surname> <given-names>M.</given-names></name> <name><surname>Medioni</surname> <given-names>G.</given-names></name> <name><surname>Trivedi</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Learning and recognition for assistive computer vision</article-title>. <source>Pattern Recogn. Lett.</source> <volume>137</volume>, <fpage>1</fpage>&#x02013;<lpage>2</lpage>. <pub-id pub-id-type="doi">10.1016/j.patrec.2019.11.006</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fosch-Villaronga</surname> <given-names>E.</given-names></name> <name><surname>Khanna</surname> <given-names>P.</given-names></name> <name><surname>Drukarch</surname> <given-names>H.</given-names></name> <name><surname>Custers</surname> <given-names>B. H.</given-names></name></person-group> (<year>2021</year>). <article-title>A human in the loop in surgery automation</article-title>. <source>Nat. Mach. Intell.</source> <volume>3</volume>, <fpage>368</fpage>&#x02013;<lpage>369</lpage>. <pub-id pub-id-type="doi">10.1038/s42256-021-00349-4</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Leo</surname> <given-names>M.</given-names></name> <name><surname>Farinella</surname> <given-names>G. M.</given-names></name></person-group> (<year>2018</year>). <source>Computer Vision for Assistive Healthcare</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Academic Press</publisher-name>.</citation>
</ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Leo</surname> <given-names>M.</given-names></name> <name><surname>Furnari</surname> <given-names>A.</given-names></name> <name><surname>Medioni</surname> <given-names>G. G.</given-names></name> <name><surname>Trivedi</surname> <given-names>M.</given-names></name> <name><surname>Farinella</surname> <given-names>G. M.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Deep learning for assistive computer vision,&#x0201D;</article-title> in <source>Computer Vision &#x02013; ECCV 2018 Workshops. Lecture Notes in Computer Science, Vol. 11134</source>, eds L. Leal-Taix&#x000E9;, and S. Roth (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-3-030-11024-6_1</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leo</surname> <given-names>M.</given-names></name> <name><surname>Medioni</surname> <given-names>G.</given-names></name> <name><surname>Trivedi</surname> <given-names>M.</given-names></name> <name><surname>Kanade</surname> <given-names>T.</given-names></name> <name><surname>Farinella</surname> <given-names>G. M.</given-names></name></person-group> (<year>2017</year>). <article-title>Computer vision for assistive technologies</article-title>. <source>Comp. Vis. Image Understand.</source> <volume>154</volume>, <fpage>1</fpage>&#x02013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1016/j.cviu.2016.09.001</pub-id></citation>
</ref>
</ref-list> 
</back>
</article>