ORIGINAL RESEARCH article

Front. Med., 27 October 2020
Sec. Dermatology
Volume 7 - 2020 | https://doi.org/10.3389/fmed.2020.585792

Part I: Accuracy of Teledermatology in Inflammatory Dermatoses

  • Department of Telemedicine, Hospital Israelita Albert Einstein, São Paulo, Brazil

Teledermatology is assuming a progressively greater role as a healthcare delivery method, especially during the current pandemic. It is important to know how accurate this tool is for different skin diseases. Most studies have focused on skin neoplasms or general dermatology; studies based on a large number of inflammatory dermatoses have not yet been performed. Such knowledge can help dermatologists decide whether or not to endorse this method. Our objective was to determine the accuracy of teledermatology for inflammatory dermatoses in a robust number of cases. A retrospective cohort study was conducted in São Paulo, Brazil, from July 2017 to July 2018, where a store-and-forward teledermatology project was implemented in primary care to triage surgical, more complex, or severe dermatoses. A total of 30,976 patients presenting 55,012 lesions took part in the project. Thirteen participating teledermatologists had three options for referring the patients: directly to biopsy, to the in-person dermatologist, or back to the general physician with the most probable diagnosis and management. In the group referred to the in-person dermatologist, we looked for the 20 most frequent inflammatory dermatoses according to the International Classification of Diseases and Related Health Problems, 10th revision (ICD-10), which resulted in 739 patients and 739 lesions. As the patients had previously been triaged by teledermatology, we were able to compare the ICD-10 codes recorded both by teledermatologists and by in-person dermatologists. The proportions of complete, partial, and no agreement between the in-person dermatologist's and the teledermatologist's diagnoses were used as the measure of accuracy. We also calculated Cohen's kappa, a statistical measure of inter-rater agreement, for complete agreement. The mean complete agreement rate for all twenty dermatoses was 78% (range 31–100%; kappa = 0.743); partial agreement was 8%; and no agreement was 14%, with variability according to the disease. Our study showed that teledermatology has high accuracy for inflammatory dermatoses. This result supports its use as a proper option for patient care.

Introduction

Telemedicine, especially during the current pandemic, is of great value for delivering healthcare. It has the potential to improve access to subspecialty expertise, reduce healthcare costs, and improve the overall quality of care. Dermatology is particularly suitable for this care system. The three main teledermatology delivery platforms are synchronous (RT: real-time teledermatology), asynchronous (SF-TD: store-and-forward teledermatology), and hybrid (combining synchronous and asynchronous forms). Synchronous teledermatology employs live video conferencing between the patient and the teledermatologist. Asynchronous teledermatology is a method whereby clinical or dermoscopic images are obtained and sent to the responding dermatologist, who can review them at a later time. Although it provides high-resolution dermatologic images and enables an efficient practice that can be performed across time zones, this modality is limited by the teledermatologist's restricted ability to obtain additional clinical history while evaluating the case (1).

Rates of diagnostic accuracy by teledermatology vary from study to study; the majority have found rates in the range of 75–80%, comparable to those of in-person care (1). Nevertheless, most studies have focused on skin neoplasms, especially skin cancer and pigmented lesions (2–5), or on general dermatology (6–10). A recent systematic review concluded that robust implementation studies of teledermatology are needed, with attention to reducing the risk of bias when assessing diagnostic accuracy (2). For this reason, we performed a study with the aim of determining the accuracy of teledermatology for inflammatory dermatoses in a robust number of cases, assessing the agreement rate between the in-person dermatologist's and the teledermatologist's diagnoses.

Materials and Methods

This was a retrospective cohort study designed to assess concordance between diagnoses made by in-person dermatologists and teledermatologists, approved by the Ethics Committee of Hospital Israelita Albert Einstein (CAAE: 97126618.6.0000.0071). We analyzed the reports of 30,976 patients included in a teledermatology triage project conducted in the city of São Paulo, Brazil, from July 2017 to July 2018.

Teledermatology Triage Project

Since there was a long waiting list of patients for an appointment with a dermatologist in the public health service, the aim of the teledermatology project was to triage the patients so that the severe, more complex, or surgical cases would be prioritized for biopsy and in-person dermatologists, while the mild cases would be managed in primary care by the general physician (GP). Briefly, there were 57,832 patients under public primary care who were on a waiting list for an appointment with a dermatologist after being referred by the primary care physician. All of them were consecutively phoned and asked to go to one of three public municipal hospitals. Once there, their demographic data, a short clinical history, and photographs of their skin lesions were collected by a nurse or a health technician, using a cell phone app created for this purpose. Thirty thousand nine hundred seventy-six individuals responded to the call and attended the project. All their data and images were securely uploaded to a platform accessed at a later time, through login and password, by thirteen teledermatologists from Hospital Israelita Albert Einstein (store-and-forward telemedicine). All thirteen teledermatologists were Brazilian board-certified dermatologists, to decrease the chance of diagnostic error. Once logged in, the teledermatologists evaluated the cases and recorded the most probable diagnosis and management. Next, they had to decide among three options for referring the patients: (1) directly to biopsy (with subsequent follow-up with an in-person dermatologist), (2) to an in-person dermatologist, or (3) back to the general physician who had originally referred the patient. Figure 1 shows the frequency of patients included, photographed lesions, and referrals made by the teledermatologists, along with the flow used to select the reports for the accuracy assessment. Teledermatologists were employees of Hospital Israelita Albert Einstein (a private institution), and in-person dermatologists were public health service employees.

Figure 1. Frequency of patients included, photographed lesions and referrals made by the teledermatologists, along with the flow used to select the reports to assess the diagnosis accuracy.

Study Design

We selected only the group referred to an in-person dermatologist (12,874 patients). Then, we assessed the reports in which the International Classification of Diseases (ICD-10) diagnosis had been filled in by the in-person dermatologist (2,290). Next, we separated the reports containing ICD-10 codes of inflammatory dermatoses (1,227). Afterwards, we selected the 20 most frequent dermatoses for inclusion in our study (1,143). As the last step, we eliminated duplicate reports from the same patient, totaling 739 reports from 739 patients (Figure 1). A minimal sketch of this selection sequence is shown below.
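The following Python sketch illustrates the selection sequence described above. It is not the authors' original code: the DataFrame column names ("referral", "icd10_in_person", "patient_id") and the list of inflammatory ICD-10 codes are hypothetical assumptions used only to make the steps concrete.

```python
# Hypothetical sketch of the report-selection steps; column names and the
# ICD-10 list are illustrative assumptions, not the study's actual data schema.
import pandas as pd

# Placeholder set of ICD-10 codes standing in for the inflammatory dermatoses of interest.
INFLAMMATORY_ICD10 = {"L20.9", "L21.9", "L30.9", "L40.9", "L70.0", "L81.0", "L81.1"}

def select_reports(reports: pd.DataFrame) -> pd.DataFrame:
    """Apply the selection steps: referral group, filled ICD-10, inflammatory
    codes, 20 most frequent dermatoses, and one report per patient."""
    step1 = reports[reports["referral"] == "in_person_dermatologist"]   # 12,874 patients
    step2 = step1[step1["icd10_in_person"].notna()]                     # 2,290 with ICD-10 filled
    step3 = step2[step2["icd10_in_person"].isin(INFLAMMATORY_ICD10)]    # 1,227 inflammatory
    top20 = step3["icd10_in_person"].value_counts().nlargest(20).index
    step4 = step3[step3["icd10_in_person"].isin(top20)]                 # 1,143 in the 20 most frequent
    return step4.drop_duplicates(subset="patient_id")                   # 739 unique patients
```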

We classified the rate of agreement as follows: (1) complete agreement, when the ICD-10 codes used in the two reports were the same; (2) partial agreement, when the ICD-10 codes used in the two reports were different but belonged to the same disease group (Table 1), representing a probable differential diagnosis; and (3) no agreement, when the two reports met neither of the previous conditions. As many inflammatory dermatoses are diagnosed clinically, we considered the in-person dermatologist's diagnosis as our gold standard. For this reason, the rate of agreement between in-person dermatologists and teledermatologists is reported in this study as accuracy.
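As an illustration only, this classification rule can be expressed as a small function. The code-to-group mapping below is a hypothetical fragment and not the actual grouping defined in Table 1.

```python
# Hypothetical sketch of the agreement classification described above.
# GROUPS is an illustrative fragment only; Table 1 defines the real disease groups.
GROUPS = {
    "L81.0": "hyperpigmentation",  # post-inflammatory hyperpigmentation
    "L81.1": "hyperpigmentation",  # chloasma
    "L20.9": "eczema",             # atopic dermatitis, unspecified
    "L30.9": "eczema",             # dermatitis, unspecified
}

def classify_agreement(tele_code: str, in_person_code: str,
                       groups: dict = GROUPS) -> str:
    """Return 'complete', 'partial', or 'no agreement' for a pair of ICD-10 codes."""
    if tele_code == in_person_code:
        return "complete"
    tele_group = groups.get(tele_code)
    if tele_group is not None and tele_group == groups.get(in_person_code):
        return "partial"
    return "no agreement"

# Example: different codes within the same disease group count as partial agreement.
assert classify_agreement("L81.1", "L81.0") == "partial"
```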

Table 1. Groups of skin disorders and respective dermatoses present in this study, considered as partial agreement between in-person dermatologists' and teledermatologists' diagnoses.

Statistical Analysis

Rates of concordance were expressed as percentages and as Cohen's kappa coefficient, a measure of inter-rater agreement (GraphPad Prism 6.0). The guidelines proposed by Landis and Koch (11) to characterize kappa values are as follows: kappa < 0, no agreement; 0.00–0.20, slight agreement; 0.21–0.40, fair agreement; 0.41–0.60, moderate agreement; 0.61–0.80, substantial agreement; and 0.81–1.00, almost perfect agreement.
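For reference, Cohen's kappa compares the observed proportion of agreement, p_o, with the proportion expected by chance, p_e, computed from each rater's marginal diagnosis frequencies: kappa = (p_o − p_e) / (1 − p_e), where p_e = Σ_k p_k(teledermatologist) × p_k(in-person dermatologist), summed over all diagnostic categories k. A kappa of 1 indicates perfect agreement, and a kappa of 0 indicates agreement no better than chance.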

Results

Table 2 shows the 26 most frequent inflammatory dermatoses diagnosed by teledermatology according to the number of patients, lesions, and sex distribution. They account for 78% (24,210/30,976) of the total number of patients and 50% (27,519/55,012) of the lesions diagnosed in the overall teledermatology project. Female and male participation was 70 and 30%, respectively, although females account for 52.6% of the population of the city of São Paulo (12). The mean number of inflammatory dermatosis lesions per patient was 1.1.

Table 2. Most frequent inflammatory dermatoses diagnosed by teledermatologists according to number of patients, sex distribution, and number of photographed lesions.

Table 3 presents the 20 most frequent inflammatory dermatoses diagnosed by in-person dermatologists for which we were able to recover the ICD-10 codes, along with the accuracy of teledermatology found in our study. The mean frequency of complete agreement was 78% for all 20 dermatoses tested (573/739), and the kappa coefficient was 0.743, which is considered substantial agreement. Xerosis had the lowest rate (31%; kappa = 0.173), and psoriasis and focal hyperhidrosis showed the highest (100%; kappa = 1.00). Partial agreement was verified in 8% of all cases (60/739), ranging from 0% (dermatophytosis, atopic dermatitis, molluscum contagiosum, psoriasis, pityriasis versicolor, and focal hyperhidrosis) to 46% (xerosis). No agreement was found in 14% (106/739); psoriasis and focal hyperhidrosis had the lowest rate (0%) and pityriasis versicolor the highest (44%).

Table 3. Most frequent inflammatory dermatoses diagnosed by in-person dermatologists and agreement with teledermatology diagnoses.

The frequency of partial agreement and the alternative diagnoses given for each inflammatory dermatosis are shown in Table 4. Post-inflammatory hyperpigmentation reached the highest number, with 16 cases, all of them receiving the same alternative diagnosis. On the other hand, seborrheic dermatitis had 13 cases of partial agreement spread across 11 different diagnoses.

Table 4. Differences between teledermatologists' and in-person dermatologists' diagnoses in cases considered partial agreement for inflammatory dermatoses.

Discussion

The differences in frequency of the inflammatory dermatoses between Tables 2 and 3 are due to the fact that a disease could be very frequently diagnosed by the teledermatologists (Table 2) yet not referred as frequently to the in-person dermatologists (Table 3). That, in fact, was the reason solar lentigo, leukoderma, pityriasis alba, dyshidrosis, lichen simplex chronicus, and stasis dermatitis were left out of the second table. These dermatoses were mostly referred back to the GP along with the diagnosis and management, so only a few cases reached the in-person dermatologist and they did not meet the inclusion criterion of being among the 20 most frequent ones.

This fact is also very relevant when discussing accuracy. Since the aim of the teledermatology triage project was to prioritize the severe, more complex, or surgical cases for biopsy and in-person dermatologists and to manage the mild cases in primary care along with the GP, the inflammatory disorders diagnosed by in-person dermatologists (Table 3) have a potential bias toward the most challenging cases. Typical or "regular" inflammatory dermatoses were most likely diagnosed and referred back to the GP. Therefore, the agreement rate found would probably be even higher if the more typical cases had been analyzed. The results of our study showed a high agreement rate between diagnoses made by teledermatologists and in-person dermatologists, corroborating the idea that teledermatology is accurate for inflammatory dermatoses. If we add the complete (78%) and partial (8%) agreement rates, we reach 86% (kappa = 0.846), which is considered an almost perfect agreement, with only 14% of cases showing no agreement. This would be remarkable even if we were not discussing the potential bias above. According to the literature, SF-TD had an overall accuracy of 64 and 65% in medium-sized studies (n = 109 and 163, respectively) and of 90 and 95% in small studies (n = 50 and 10, respectively) (6). Another article found an agreement of 90% in 120 cases of SF-TD consultations (7). Lim et al. reported 88% agreement in 53 cases (8). Weingast et al. evaluated 263 patients and found an accuracy of 80% (9). O'Connor et al. assessed 40 patients and found an accuracy of 83% (10).
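As a simple check using the counts reported in the Results, (573 complete + 60 partial) / 739 = 633/739 ≈ 86% of cases showed at least partial agreement, leaving 106/739 ≈ 14% with no agreement.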

Accuracy differed among the 20 most frequent dermatoses diagnosed by in-person dermatologists in our study. Eight diseases reached a very high complete agreement rate (90% or above): acne, atopic dermatitis, molluscum contagiosum, vitiligo, psoriasis, telogen effluvium, alopecia areata, and focal hyperhidrosis. Six disorders showed a good complete agreement rate (70–89%): contact dermatitis, androgenetic alopecia, chloasma, rosacea, nail disorders, and urticaria. Six inflammatory dermatoses were less accurate (<60% complete agreement): dermatophytosis, post-inflammatory hyperpigmentation, seborrheic dermatitis, pityriasis versicolor, xerosis, and nummular dermatitis. What were those diseases mistaken for? To verify that, we checked whether the discordant diagnoses could be classified as possible differential diagnoses, which we considered partial agreement.

Examining the six diseases with complete agreement between 70 and 89% and taking the partial agreement rate into account, three of them would show a considerable change. Contact dermatitis would increase from 77 to 87%, owing to five cases that were in fact four atopic dermatitis and one lichen simplex chronicus. Androgenetic alopecia (AGA) would increase from 78 to 94%, owing to four alopecia areata, three telogen effluvium, and one cicatricial alopecia. Chloasma would also rise from 89 to 94.5% if we included the two cases of post-inflammatory hyperpigmentation in Table 4.

Most interesting, however, is to look at the diseases showing the least accuracy. Post-inflammatory hyperpigmentation (PIH) deserves a separate interpretation, since all 16 cases of partial agreement had the same diagnosis: chloasma. One hypothesis is a typing error, because their ICD-10 codes are almost the same (L81.0 and L81.1). Another is that dermatologists misuse the chloasma ICD-10 code for PIH, since chloasma is a very frequent disease whose ICD-10 code may already be known by heart, whereas the PIH code may not. Nummular dermatitis and xerosis would at least double in accuracy (from 37.5 to 75% and from 31 to 77%, respectively) if we considered the partial agreement diagnoses. Seborrheic dermatitis (SD) was the disease with the highest number of differential diagnoses (13 cases), and considering the partial agreement rate, its accuracy would go from 34 to 79%. On the scalp, SD was diagnosed as AGA and perifolliculitis capitis, which can even occur simultaneously, and on the skin it could be confused with many diseases such as psoriasis, eczemas, lupus erythematosus, dermatophytosis, rosacea, and actinic keratosis. Dermatophytosis and pityriasis versicolor were the least accurate diagnoses even when complete and partial agreement were considered together (63 and 56%, respectively). This may be due to some limitation in assessing these lesions through teledermatology or to the great variety of possible differential diagnoses. Again, the fact that the most typical cases were meant to be managed by teledermatology and not sent to in-person dermatologists could have played an important role for these two dermatoses in our study.

Although this was a retrospective study and a considerable amount of data was missing, we believe this is one of the studies in the literature with the largest number of inflammatory dermatoses included. The study was performed in two centers, and different dermatologists performed the tele and in-person examinations. This is beneficial in that we assessed agreement between different examiners, but it may also introduce some bias, since the technical skills of the two groups may differ.

Our study, in a large number of patients presenting the most common inflammatory dermatoses, showed that the mean accuracy of teledermatology was high, varying according to the disease. This result supports store-and-forward teledermatology as a proper option for patient care.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by Ethics Committee of Hospital Israelita Albert Einstein (CAAE: 97126618.6.0000.0071). Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.

Author Contributions

MG-B, RS, and EC were responsible for the study design. MG-B was responsible for writing the article. MG-B and RS were responsible for the data analyses. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Lee JJ, English JC. Teledermatology: a review and update. Am J Clin Dermatol. (2018) 19:253–60. doi: 10.1007/s40257-017-0317-6

2. Finnane A, Dallest K, Janda M, Soyer HP. Teledermatology for the diagnosis and management of skin cancer: a systematic review. JAMA Dermatol. (2017) 153:319–27. doi: 10.1001/jamadermatol.2016.4361

3. Warshaw EM, Lederle FA, Grill JP, Gravely AA, Bangerter AK, Fortier LA, et al. Accuracy of teledermatology for pigmented neoplasms. J Am Acad Dermatol. (2009) 61:753–65. doi: 10.1016/j.jaad.2009.04.032

4. Chuchu N, Dinnes J, Takwoingi Y, Matin RN, Bayliss SE, Davenport C, et al. Teledermatology for diagnosing skin cancer in adults. Cochrane Database Syst Rev. (2018) 12:CD013193. doi: 10.1002/14651858.CD013193

5. Markun S, Scherz N, Rosemann T, Tandjung R, Braun RP. Mobile teledermatology for skin cancer screening: a diagnostic accuracy study. Medicine. (2017) 96:e6278. doi: 10.1097/MD.0000000000006278

6. Warshaw EM, Hillman YJ, Greer NL, Hagel EM, MacDonald R, Rutks IR, et al. Teledermatology for diagnosis and management of skin conditions: a systematic review. J Am Acad Dermatol. (2011) 64:759–72. doi: 10.1016/j.jaad.2010.08.026

7. Lasierra N, Alesanco A, Gilaberte Y, Magallón R, García J. Lessons learned after a three-year store and forward teledermatology experience using internet: strengths and limitations. Int J Med Inform. (2012) 81:332–43. doi: 10.1016/j.ijmedinf.2012.02.008

8. Lim AC, Egerton IB, See A, Shumack SP. Accuracy and reliability of store-and-forward teledermatology: preliminary results from the St George Teledermatology Project. Australas J Dermatol. (2001) 42:247–51. doi: 10.1046/j.1440-0960.2001.00529.x

9. Weingast J, Scheibböck C, Wurm EM, Ranharter E, Porkert S, Dreiseitl S, et al. A prospective study of mobile phones for dermatology in a clinical setting. J Telemed Telecare. (2013) 19:213–8. doi: 10.1177/1357633x13490890

10. O'Connor DM, Jew OS, Perman MJ, Castelo-Soccio LA, Winston FK, McMahon PJ. Diagnostic accuracy of pediatric teledermatology using parent-submitted photographs: a randomized clinical trial. JAMA Dermatol. (2017) 153:1243–8. doi: 10.1001/jamadermatol.2017.4280

11. Landis J, Koch G. The measurement of observer agreement for categorical data. Biometrics. (1977) 33:159–74. doi: 10.2307/2529310

12. IBGE. Censo 2010. (2010). São Paulo. Available online at: https://cidades.ibge.gov.br/brasil/sp/sao-paulo/pesquisa/23/27652 (accessed June 12, 2020).

Keywords: teledermatology, accuracy, telemedicine, inflammatory dermatoses, telehealth

Citation: Giavina-Bianchi M, Sousa R and Cordioli E (2020) Part I: Accuracy of Teledermatology in Inflammatory Dermatoses. Front. Med. 7:585792. doi: 10.3389/fmed.2020.585792

Received: 21 July 2020; Accepted: 30 September 2020;
Published: 27 October 2020.

Edited by:

Hang Li, Peking University First Hospital, China

Reviewed by:

Irina Khamaganova, Pirogov Russian National Research Medical University, Russia
Artem Vorobyev, University Medical Center Schleswig-Holstein, Germany

Copyright © 2020 Giavina-Bianchi, Sousa and Cordioli. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Mara Giavina-Bianchi, marahgbianchi@gmail.com
