Sec. Public Mental Health
Volume 10 - 2019 | https://doi.org/10.3389/fpsyt.2019.01017
From Digital Mental Health Interventions to Digital “Addiction”: Where the Two Fields Converge
- 1Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, United States
- 2Mental Health and Addiction Research Group, Department of Health Sciences & Hull York Medical School, University of York, York, United Kingdom
Scientific literature from the last two decades indicates that, when it comes to mental health, technology is presented either as a panacea or an anathema. This is partly because researchers, too frequently, have planted themselves either in the field of digital mental health interventions (variably called “telepsychiatry”, “digital therapeutics”, “computerized therapy”, etc.), or in that of the problems arising from technology, with little cross-fertilization between the two. Yet a closer look at the two fields reveals unifying themes that underpin both the advantages and dangers of technology in mental health. This article discusses five such themes. First, the breakneck pace of technology evolution keeps digital mental health interventions updated but also creates new potentially problematic activities, leaving researchers perennially behind, as new technologies become outdated by the time they are studied. Second, the freedom of creating and using technologies in a regulatory vacuum has led to proliferation and choice, but also to a Wild-West online environment. Third, technology is an open window for accessing information, but also for compromising privacy, with serious implications for online psychology and digital mental health interventions. Fourth, weak bonds characterize online interactions, including those between therapists and patients, contributing to high attrition from digital interventions. Finally, economic analyses of technology-enabled care may show good value for money, but often fail to capture the true costs of technology, a fact that is mirrored in other online activities. The article ends with a call for collaboration between two interrelated fields that have been, until now, mutually insular.
In Aesop's ancient fable “The Man and the Satyr”1, the satyr saw the man blow on his hands on a cold winter's day and asked why. The man responded: “so that I can warm them up; can't you see how cold it is?” When they sat down to eat, the man cut a piece of his roast and blew on it before he ate it. The satyr asked why. The man responded: “so that it cools down; can't you see how hot it is?” The satyr, indignant, walked away. “That's enough,” he said. “I can no longer be your friend since I see you blowing hot and cold with the same breath.” The allegory helps illustrate how digital technology in mental health can be therapeutic and problematic at the same time. This dissonance can cause confusion and mistrust, leading clinicians, patients and the general public either to disengage from digital mental health interventions or to underestimate the problems and risks associated with extensive technology use.
As a case in point, a US telemedicine company recently announced positive results in a study of an action video game designed to treat attention deficit and hyperactivity disorder (ADHD), with plans to market it as a prescription digital therapy for ADHD (1). Meanwhile, several studies, including systematic reviews of comorbidity studies (e.g., 2–4), have shown a strong link between ADHD and Internet Gaming Disorder (IGD), suggesting that online gaming and other online activities may actually be a causal or contributing factor in developing ADHD. Similarly, a meta-analytic study showed that digital therapies for children and young people's mental health were promising (5), but the findings also fueled anxiety in parents who could not reconcile encouraging their children to use the internet and games for treatment while constantly pressing them to reduce screen time, given concerns about the negative impact of digital media on children's emotional and social wellbeing (6, 7).
As these data illustrate, the field of digital mental health interventions is increasingly on a collision course with that of digital mental health problems and risks. On the one hand, empirical findings on the adverse effects of digital technologies can dissuade adoption of digital mental health interventions such as “serious games”, out of fear that they may fuel a “gaming disorder” (8) or have other negative psychological impacts, such as inciting violence through possibly violent content (9, 10), inviting bullying by facilitating access to users (11, 12), or encouraging social withdrawal (13). On the other hand, enthusiastic evidence about the efficacy and cost-effectiveness of digital interventions is rarely accompanied by reasonable caution about the uncertainty of positive outcomes with technology, or about its financial, legal, ethical and therapeutic risks, until a brick wall is hit when trying to transfer digital interventions from scholarly ivory towers to real-world clinics.
Between the field of digital mental health interventions and that of the risks and adverse effects of digital technologies, rarely has one research group cited or built upon the work of the other (14). Yet reciprocal recognition and collaboration could prove very beneficial: research from both camps over the last two decades suggests shared observations and challenges. In this article, we aim to highlight common themes that have defined research into both the benefits and problems of technology in mental health, and to make recommendations for cross-collaboration between two interrelated fields of research that have traditionally been mutually insular. Rather than review the benefits and harms of technology for mental health as revealed in research (a very different, much more ambitious endeavor), we will instead focus on how research in both fields has revealed common issues and challenges.
Speed of Technology: Rapidly Evolving But Easily Outdated
In 1965, Gordon Moore, an engineer and the cofounder of Intel, predicted that the number of transistors that can be fitted on a microchip would double every two years (15). What became known as “Moore's Law” has held up surprisingly well, as has its corollary: that computer processing speed would grow at a similar rate. Digital technology has evolved exponentially, its breakneck speed far outpacing that of mental health research into it.
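Moore's Law is a simple compounding relationship: a count that doubles every fixed period grows as count × 2^(elapsed years / doubling period). As an illustrative sketch (the starting figure below is only roughly Intel 4004-era scale, not an exact historical count):

```python
def projected_transistors(base_count: float, base_year: int, year: int,
                          doubling_period: float = 2.0) -> float:
    """Project a chip's transistor count under Moore's Law-style doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# Illustrative only: ~2,300 transistors in 1971, projected 20 years out.
# Twenty years is 10 doublings, i.e., a 1,024-fold increase.
print(projected_transistors(2300, 1971, 1991))  # 2355200.0
```

Even this toy projection makes the core point of the section concrete: over the span of a single multi-year clinical study, the underlying technology can grow by orders of magnitude.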
Slowness is inherent to a robust research process: a research question has to be identified, a protocol conceived and approved, participants recruited, an intervention tested, data analyzed, an article written, and peer review performed, before a manuscript can finally be accepted and published. If investigators were testing an antidepressant, there would be little reason to fear that the hard-earned results would be obsolete by publication time. However, when investigating the positive health uses of technology or its negative psychological effects, there is a “moving target” aspect to the process: the platform under investigation risks becoming less relevant by the study's end, as users move to ever newer, faster and more engaging platforms. As such, digital technology is, almost by definition, always ahead of the science investigating it, and the research community is perennially “playing catch up”, whether it is weighing in on the merits of the latest digital therapeutic tool being marketed or on the ills of the most recent all-consuming video game overtaking culture. One result is that clinical researchers often feel ill-equipped to address the latest technologies, leaving investors, startup executives and engineers to take the lead and steer the field.
Proliferation of Technology: Freedom But Also a “Wild-West”
The proliferation of digital products means that developers have the freedom to create and distribute diverse technologies, while consumers have the freedom to choose among and use different products. This “laissez-faire” approach may have been the cornerstone of the classical economic liberalism that propelled innovation and trade in 18th-century Europe. But, on the 21st-century World Wide Web, societies have a duty of care towards technology's end users, especially if those users turn to technology seeking psychological, social, and emotional support.
Legislative and regulatory bodies have struggled to understand, and respond to, this proliferation of technology. Delayed oversight and lack of consistent and comprehensive protective frameworks have led to negative consequences. In the US, for much of the internet's life, online communications were governed by the 1996 Communications Decency Act, Section 230, which essentially immunized sites from liability for problematic behavior by their users, including bullying, misinformation and sexual predation (16). While Europe has often led the way in internet regulation as a means of protecting consumers, significant safety protections such as the Right To Be Forgotten (17) and the General Data Protection Regulation (18) arrived relatively late (2012 and 2018, respectively). This contributed to a “Wild West”-like online environment that nurtured negative personality traits from narcissism (19, 20) to aggression (20), and may have contributed to the radicalization (20–22), terrorism (20–22) and counter-democratic shifts that have been blamed on the internet (20).
Similarly, despite the field of digital mental health interventions being three decades old (23), it was only in 2017 that the US Food and Drug Administration initiated a dedicated program for the testing and scrutiny of digital health tools and innovations (24). In the UK, the National Health Service (NHS) introduced a digital, data and technology standards framework as recently as late 2018 (25), forming the basis for the March 2019 Evidence Standards Framework for Digital Health Technologies by the National Institute for Health and Care Excellence (NICE) (26). The legal and regulatory vacuum in which digital mental health interventions have evolved has had deleterious effects on the field's credibility and, one fears, on public health. This is in part because it has allowed false advertising and unfounded scientific claims about the therapeutic value of some “health products” to go largely unchallenged (27). Everyone stands to benefit from comprehensive regulation and legislation that offers broader protection against a digital “Wild West” without compromising the freedom to use online media and digital mental health interventions.
Visibility Through Technology: Improved Access But Compromised Privacy
Technology is an open window through which people can access and distribute information, share experiences and communicate with each other. In the past, clinical knowledge was the privilege of a few professionals who communicated it to their patients during face-to-face meetings, if locally available. Technology has improved patient access to specialist knowledge via standardized therapy programs, which allow users to learn therapy skills and manage their own care, and via remote digital mental health platforms, which allow visits with geographically removed professionals. Technology has also enabled users with mental health problems to share their stories and form peer support networks. Further, it has allowed information exchange between professionals via digital media in a way that expedites risk assessment and peer consultations and ensures continuity of care. The open window afforded by technology has come with a price, though: the threat of compromised privacy.
Issues of privacy are at the core of how research into both digital mental health interventions and technology-related problems has evolved. With all-too-frequent news of hacks into supposedly secure networks, there is growing distrust of digital systems as repositories of health information (28). Beyond concerns about electronic medical records, this has meant hesitation on the part of some providers and patients to adopt digital platforms whose confidentiality cannot be guaranteed, even when efficacy data suggest benefit. Legislative actions to protect health information in the digital medical record and on telemedicine platforms have, again, lagged behind increasingly sophisticated modes of violation. As such, regulations such as the US Health Insurance Portability and Accountability Act (HIPAA) (1996) (29), the Health Information Technology for Economic and Clinical Health Act (HITECH Act) (2009) (30) and the Omnibus Rule (2013) (31), as well as the UK's Data Protection Act (2018) (32), have been only partially successful.
Beyond compromising health information, the post-privacy age ushered in by internet-related technologies has had important effects on psychology. Pre-internet psychological literature delineated several privacy components (33, 34), all of which would seem impacted by our heavily technology-reliant lifestyle (20, 34). Components that, together, constitute privacy include reserve, or the ability to control disclosures; isolation, or the use of geographic distance to separate oneself from others; solitude, or the freedom to place oneself where one cannot be seen or heard; selective intimacy, or the ability to be with an individual or group to the exclusion of others; and anonymity. These privacy components have been shown to mediate psychological functions that are crucial to wellbeing, including contemplation, autonomy, rejuvenation, catharsis, and recovery (33, 34). If the building blocks of privacy are under digital assault, including by facial recognition and Artificial Intelligence (AI) tools now being applied to massive social media databases, then the psychological processes that rely on them may be negatively impacted, with potentially serious consequences (35). As such, privacy violations may be contributing to internet-related psychopathology, just as they threaten the evolution and adoption of digital mental health interventions.
Attraction of Technology: Strong Pull But Loose Ties
In June 2019, there were 4,422,494,622 internet users, representing 57.3% of the global population (36). There are over 10,000 mental health-related smartphone apps (37), not counting computerized programs, websites, virtual reality systems and wearables. Designers often approach online users as fickle consumers on the constant lookout for new digital opportunities: easily drawn in, but just as easily distracted away. Much of the research in online “user experience”, for example, focuses on maximizing “dwell time” (the average time a user spends engaged with a site's content), extending “scroll depth” (how far down the page a user gets when reading content), minimizing “bounce rate” (the percentage of users who navigate away from a site after viewing only one page), and decreasing “time between visits” (38). In internet psychology, this competition for attention has been blamed for moving the internet in more extreme directions of representation, including radicalization and narcissism, as desperate page owners vie to attract users at all costs (20).
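The engagement metrics above are simple aggregates over session logs. A minimal sketch with invented session data (the fields and figures are hypothetical, not any real platform's analytics):

```python
from statistics import mean

# Hypothetical session logs: (pages_viewed, seconds_on_site, max_scroll_fraction)
sessions = [(1, 15, 0.20), (4, 320, 0.90), (1, 8, 0.10), (3, 210, 0.70)]

dwell_time = mean(s[1] for s in sessions)    # average time engaged with content
scroll_depth = mean(s[2] for s in sessions)  # how far down the page users get
# Bounce rate: share of visits that end after a single page.
bounce_rate = sum(1 for s in sessions if s[0] == 1) / len(sessions)

print(dwell_time, scroll_depth, bounce_rate)
```

The point of showing the arithmetic is how little of a user's psychological experience it captures: these are measures of capture and retention, not of benefit.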
Like the tenuous commitment of internet users to content, the online redefinition of a social media “friend” or romantic interest (the app-driven evolution from “dating” to “hooking up” (20, 39, 40)) speaks to a similarly “shaky” commitment to relationships developed online. More generally, the virtualization of relationships across digital platforms means looser ties to sites and individuals. This seems true across social media and digital mental health delivery platforms alike.
The ease of “unfriending” an acquaintance on Facebook or blocking someone on Instagram or Twitter may not differ in fundamental ways from the premature termination of therapy with an e-counselor over a digital mental health portal (41). The provider-patient relationship across many digital platforms is often nonexistent or limited, mirroring digital relationships in the broader sense. The thousands of mental health apps do not appear to have led to measurable population-level mental health benefits. Could it be because, for the most part, there is no trusted provider to recommend or guide their use, or to incorporate them within a more traditional delivery model that encourages patient engagement through supportive accountability?
The attrition problem in digital mental health interventions has been borne out by scientific data emanating from well-designed research studies (e.g., 42, 43). Perhaps because of the lack of a visible, knowable interlocutor on the other side of many online exchanges, engagement with digital experiences tends to be superficial and to lack anchoring. There is ample evidence, though, to suggest that some clinician support is better than none when it comes to outcomes with digital interventions (44–48); in fact, the greater the therapist input, the more effective the interventions seem to be (49, 50). Will fully automated AI platforms that simulate human decision-making and adaptability be able to sustain patients' engagement beyond an initial curiosity-driven stage? Until we find out, we need to invest in human support that strengthens ties and engagement, and ultimately improves outcomes, with digital interventions.
Economic Value of Technology: Cost-Effective But Costly
There is an assumption that digital mental health interventions offer “good value for money”. By encouraging patient self-management, allowing remote delivery, enabling a less specialist workforce to deliver complex interventions, and reducing waiting lists, digital mental health interventions would be expected to save clinician time and make clinical work more efficient. This assumption comes with several problems. First, we may not be able to forgo the traditional intervention for ethical, clinical or practical reasons; e.g., we cannot prohibit patients from seeing their family doctor in favor of self-management at home. Second, spending on technology is often frontloaded (e.g., the cost of software and hardware), whereas savings or improved outcomes accrue in the longer term, and payers may not have the money to invest upfront. Third, costs may be incurred in one sector while benefits or savings accrue in another whose budget is not linked to the first (e.g., costs for digital therapies are paid by the health clinic or the user, but savings accrue in the employment sector in the form of reduced absenteeism). Fourth, the per-patient treatment cost may decrease, but, because of technology's greater reach, the overall number of people treated may go up, thereby increasing total healthcare costs. In the end, the overall economic incentives to adopt digital therapies may be weak, even if the therapies are proven cost-effective.
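The frontloading problem can be made concrete with a toy break-even calculation (all figures hypothetical): a large upfront outlay against modest annual savings takes years to recoup, which helps explain payers' reluctance even when an intervention is nominally cost-effective.

```python
from typing import Optional

def break_even_year(upfront_cost: float, annual_saving: float,
                    horizon_years: int = 30) -> Optional[int]:
    """First year in which cumulative savings cover the upfront spend (None if never)."""
    cumulative = 0.0
    for year in range(1, horizon_years + 1):
        cumulative += annual_saving
        if cumulative >= upfront_cost:
            return year
    return None

# Hypothetical: a 500,000 software/hardware outlay saving 80,000 per year.
print(break_even_year(500_000, 80_000))  # pays for itself only in year 7
```

A fuller analysis would discount future savings and attribute them to the sector that actually realizes them, which, as noted above, is often not the sector that paid.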
It is similarly easy to explain away the overuse of internet-related platforms as a means of enhancing one's productivity and, therefore, living standards. “Multitasking”, as enabled by a powerful smartphone or by simultaneously open windows on one's desktop, can feed the illusion that one has been cloned or has grown an extra pair of hands and can now do the work of more than one individual. However, economists still debate whether a “productivity miracle” has been sustained through the successive waves of internet-related technology evolution (51). It turns out that a lot of what people do online, from mindless surfing to online gaming to catching up on celebrity gossip or social media updates, may not necessarily add to their material wealth or the gross domestic product (51). Any benefits from technology-enabled activity also have to be weighed against the costs of treating the population of distracted or otherwise psychologically affected individuals. Whether assessing online distractibility or technology-delivered treatments, there is more to the cost debate than a surface economic reading might suggest.
Conclusion
Technology blows hot and cold with the same breath when it comes to mental health. Speed, proliferation, visibility, attraction, and economic value are some of the attributes that underlie both its benefits and problems. To our knowledge, no paper has addressed the challenges and themes common to the field of digital mental health interventions and that of the problematic use of digital technology. Research at the intersection of digital technology and mental health has typically focused either on the benefits or the problems, in mutually exclusive fields of enquiry. The resulting literature makes digital technology look like either a panacea or an anathema. The truth, of course, is more complex and more likely to be revealed via a “global”, collaborative approach between researchers in the arena of digital mental health interventions and those exploring the risks and negative consequences of technology. In this paper, we have attempted to bring closer together two disparate areas of scholarship. The fact that, as we discussed, similar forces appear to have partially defined both research fields makes such collaborations particularly promising. Joint efforts would empower the research community to understand the psychological, societal, ethical and economic forces at play, and to suggest solutions.
Author Contributions
Both authors contributed to the conceptualization, researching, and writing of the article.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The authors received no external help in conceptualizing, researching, and writing the article.
- ^ Satyr: Ancient Greek creature, half-man and half-goat, associated with hedonistic pursuits and the god Dionysus.
1. Kollins SH, Bower J, Findling RL, Keefe R, Epstein J, Cutler AJ, et al. A multicenter, randomized, active-control registration trial of software treatment for actively reducing severity of ADHD (STARS-ADHD) to assess the efficacy and safety of a novel, home-based, digital treatment for pediatric ADHD. J Am Acad Child Adolesc Psychiatry (2018) 57(10):S172. doi: 10.1016/j.jaac.2018.09.128
2. Carli V, Durkee T, Wasserman D, Hadlaczky G, Despalins R, Kramarz E, et al. The association between Pathological Internet Use and comorbid psychopathology: a systematic review. Psychopathology (2013) 46:1–13. doi: 10.1159/000337971
3. González-Bueso V, Santamaría JJ, Fernández D, Merino L, Montero E, Ribas J. Association between Internet Gaming Disorder or pathological video-game use and comorbid psychopathology: a comprehensive review. Int J Environ Res Public Health (2018) 15(4):668. doi: 10.3390/ijerph15040668
4. Párraga JL, Perez BC, Lopez-Martin S, Albert J, Fernandez-Mayoralas DM, Fernandez-Perrone AL, de Domingo AJ, et al. Attention-deficit/hyperactivity disorder and lifestyle habits in children and adolescents. Actas Esp Psiquiatr (2019) 47(4):158–64.
5. Hollis C, Falconer CJ, Martin JL, Whittington C, Stockton S, Glazebrook C, et al. Annual Research Review: Digital health interventions for children and young people with mental health problems - a systematic and meta-review. J Child Psychol Psychiatry (2017) 58(4):474–503. doi: 10.1111/jcpp.12663
6. DeCamp W. Parental influence on youth violent video game use. Soc Sci Res (2019) 82:195–203. doi: 10.1016/j.ssresearch.2019.04.013
7. Álvarez-García D, Núñez JC, González-Castro P, Rodríguez C, Cerezo R. The effect of parental control on cyber-victimization in adolescence: the mediating role of impulsivity and high-risk behaviors. Front Psychol (2019) 10:1159. doi: 10.3389/fpsyg.2019.01159
8. Paulus FW, Ohmann S, von Gontard A, Popow C. Internet gaming disorder in children and adolescents: a systematic review. Dev Med Child Neurol (2018) 60(7):645–59. doi: 10.1111/dmcn.13754
9. Anderson CA, Bushman BJ, Bartholow BD, Cantor J, Christakis D, Coyne SM, et al. Screen violence and youth behavior. Pediatrics (2017) 140(Suppl 2):S142–7. doi: 10.1542/peds.2016-1758T
10. Ybarra ML, Diener-West M, Markow D, Leaf PJ, Hamburger M, Boxer P. Linkages between internet and other media violence with seriously violent behavior by youth. Pediatrics (2008) 122(5):929–37. doi: 10.1542/peds.2007-3377
11. Aboujaoude E, Savage MW, Starcevic V, Salame WO. Cyberbullying: review of an old problem gone viral. J Adolesc Health (2015) 57(1):10–8. doi: 10.1016/j.jadohealth.2015.04.011
12. Kennedy RS. Bullying trends in the United States: a meta-regression. Trauma Violence Abuse (2019) 1:1524838019888555. doi: 10.1177/1524838019888555
13. Peterka-Bonetta J, Sindermann C, Elhai JD, Montag C. Personality associations with smartphone and Internet Use Disorder: a comparison study including links to impulsivity and social anxiety. Front Public Health (2019) 7:127. doi: 10.3389/fpubh.2019.00127
14. Aboujaoude E, Starcevic V. Introduction: www (dot) mental health. In: Aboujaoude E, Starcevic V, editors. Mental Health in the Digital Age: Grave Dangers, Great Promise. Oxford University Press: New York (2015).
15. Dovey D. (2018). The future of technology is uncertain as Moore's Law comes to an end. Newsweek. Available at: https://www.newsweek.com/future-technology-uncertain-moores-law-comes-end-807546 [Accessed December 30, 2019]
16. Federal Communications Commission. Telecommunications act of 1996. Available from: https://www.fcc.gov/general/telecommunications-act-1996 [Accessed August 15, 2019].
17. Travis A, Arthur C. (2014). EU court backs right to be forgotten: Google must amend results on request. The Guardian. Available at: https://www.theguardian.com/technology/2014/may/13/right-to-be-forgotten-eu-court-google-search-results [Accessed December 30, 2019]
18. European Commission. (2018). A new era for data protection in the EU. Europa. Available at: https://ec.europa.eu/info/sites/info/files/data-protection-factsheet-changes_en.pdf [Accessed December 30, 2019]
19. Buffardi LE, Campbell WK. Narcissism and social networking Web sites. Pers Soc Psychol Bull (2008) 34(10):1303–14. doi: 10.1177/0146167208320061
20. Aboujaoude E. Virtually you: the dangerous powers of the e-personality. W.W. Norton: New York (2011).
21. Jones E. The reception of broadcast terrorism: recruitment and radicalisation. Int Rev Psychiatry (2017) 29(4):320–6. doi: 10.1080/09540261.2017.1343529
22. Bail CA, Merhout F, Ding P. Using Internet search data to examine the relationship between anti-Muslim and pro-ISIS sentiment in U.S. counties. Sci Adv (2018) 4(6):eaao5948. doi: 10.1126/sciadv.aao5948
23. Aboujaoude E. Three decades of telemedicine in obsessive-compulsive disorder: a review across platforms. JOCRD (2017) 14:65–70. doi: 10.1016/j.jocrd.2017.06.003
24. Food and Drug Administration. (2017). Digital health innovation action plan. Digital Health Program. Available at: https://www.fda.gov/media/106331/download [Accessed December 30, 2019]
25. NHS. (2018). BETA - NHS digital, data and technology standards framework. NHS digital. Available at: https://digital.nhs.uk/about-nhs-digital/our-work/nhs-digital-data-and-technology-standards/framework [Accessed December 30, 2019]
26. NICE. Evidence Standards Framework for Digital Health Technologies. Available from: https://www.nice.org.uk/Media/Default/About/what-we-do/our-programmes/evidence-standards-framework/digital-evidence-standards-framework.pdf [Accessed 15 Aug 2019]
27. Federal Trade Commission. (2016). Lumosity to pay $2 million to settle FTC deceptive advertising charges for its “brain training” program. Bureau of Consumer Protection. Available at: https://www.ftc.gov/news-events/press-releases/2016/01/lumosity-pay-2-million-settle-ftc-deceptive-advertising-charges [Accessed August 15, 2019]
28. Healthcare IT News staff. (2018). The biggest health IT breaches of 2018 (so far). Healthcare IT News. Available at: https://www.healthcareitnews.com/projects/biggest-healthcare-data-breaches-2018-so-far [Accessed August 15, 2019]
29. US Department of Health and Human Services. (1999). Summary of the HIPAA privacy rule. HIPAA Compliance Assistance. Available at: https://www.hhs.gov/sites/default/files/privacysummary.pdf [Accessed August 15, 2019]
30. US Department of Health and Human Services. (2009). HITECH Act. Federal Register. Available at: https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/administrative/enforcementrule/enfifr.pdf [Accessed August 15, 2019]
31. US Department of Health and Human Services. (2013). Omnibus HIPAA rulemaking. HIPAA Home. Available at: https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/combined-regulation-text/omnibus-hipaa-rulemaking/index.html [Accessed August 15, 2019]
32. The National Archives. (2018). The data protection act. UK Public General Acts. Available at: http://www.legislation.gov.uk/ukpga/2018/12/contents/enacted [Accessed August 15, 2019]
33. Pedersen DM. Dimensions of privacy. Percept Mot Skills (1979) 48:1291–7.
34. Pedersen DM. Psychological functions of privacy. J Environ Psychol (1997) 17:147–56. doi: 10.1006/jevp.1997.0049
35. Aboujaoude E. Protecting privacy to protect mental health: the new ethical imperative. J Med Ethics (2019). doi: 10.1136/medethics-2018-105313
36. Miniwatts Marketing Group. (2019). Internet users in the world by regions – 2019 June – updated. Internet World Stats. Available at: https://www.internetworldstats.com/stats.htm [Accessed August 16, 2019]
37. Torous J, Roberts LW. Needed innovation in digital health and smartphone applications for mental health: transparency and trust. JAMA Psychiatry (2017) 74(5):437–8. doi: 10.1001/jamapsychiatry.2017.0262
38. Gaudette E. (2018). 2018's content metric of the year: time. Contently. Available at: https://contently.com/2018/12/19/2018-content-metric-time [Accessed August 15, 2019]
39. Abbasi IS, Alghamdi NG. The pursuit of romantic alternatives online: social media friends as potential alternatives. J Sex Marital Ther (2018) 44(1):16–28. doi: 10.1080/0092623X.2017.1308450
40. Rosenbaum MS, Daunt KL, Jiang A. Craigslist exposed: the Internet-mediated hookup. J Homosex (2013) 60(4):505–31. doi: 10.1080/00918369.2013.760305
41. Aboujaoude E. Telemental health: why the revolution has not arrived. World Psychiatry (2018) 17(3):277–8. doi: 10.1002/wps.20551
42. Mohr DC, Cuijpers P, Lehman K. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions. J Med Internet Res (2011) 13(1):e30. doi: 10.2196/jmir.1602
43. Melville KM, Casey LM, Kavanagh DJ. Dropout from internet-based treatment for psychological disorders. Brit J Clin Psychol (2010) 49:455–71. doi: 10.1348/014466509X472138
44. Spek V, Cuijpers P, Nyklícek I, Riper H, Keyzer J, Pop V. Internet-based cognitive behavior therapy for symptoms of depression and anxiety: a meta-analysis. Psychol Med (2007) 37:319–28. doi: 10.1017/S0033291706008944
46. Gilbody S, Brabyn S, Lovell K, Kessler D, Devlin T, Smith L, et al. Telephone-supported computerised cognitive–behavioural therapy: REEACT-2 large-scale pragmatic randomised controlled trial. Brit J Psychiat (2017) 210(05):362–7. doi: 10.1192/bjp.bp.116.192435
47. Cuijpers P, Donker T, van Straten A, Li J, Andersson G. Is guided self-help as effective as face-to-face psychotherapy for depression and anxiety disorders? A systematic review meta-analysis comparative outcome studies. Psychol Med (2010) 40:1943–57. doi: 10.1017/S0033291710000772
48. Cuijpers P, Donker T, Johansson R, Mohr DC, van Straten A, Andersson G. Self-guided psychological treatment for depressive symptoms: a meta-analysis. PLOS ONE (2011) 6(6):e21274. doi: 10.1371/journal.pone.0021274
49. Cuijpers P, Marks I, van Straten A, Cavanagh K, Gega L, Andersson G. Computer-aided psychotherapy for anxiety disorders: A meta-analytic review. Cogn Behav Ther (2009) 38(2):66–82. doi: 10.1080/16506070802694776
50. Johansson R, Andersson G. Internet-based psychological treatments for depression. Expert Rev Neurother (2012) 12(7):861–9. doi: 10.1586/ern.12.63
51. Cassidy J. (2013). What happened to the internet productivity miracle? The New Yorker. Available at: https://www.newyorker.com/news/john-cassidy/what-happened-to-the-internet-productivity-miracle [Accessed August 15, 2019]
Keywords: telepsychiatry, telemental health, telemedicine, problematic internet use, internet addiction, privacy
Citation: Aboujaoude E and Gega L (2020) From Digital Mental Health Interventions to Digital “Addiction”: Where the Two Fields Converge. Front. Psychiatry 10:1017. doi: 10.3389/fpsyt.2019.01017
Received: 02 September 2019; Accepted: 23 December 2019;
Published: 21 January 2020.
Edited by: Jochen Mutschler, Private Clinic Meiringen, Switzerland
Reviewed by: Agnes Von Wyl, Zurich University of Applied Sciences, Switzerland
Roser Granero, Autonomous University of Barcelona, Spain
Copyright © 2020 Aboujaoude and Gega. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Elias Aboujaoude, firstname.lastname@example.org