- Department of Behavioural Sciences, OsloMet - Oslo Metropolitan University, Oslo, Norway
Introduction: Our society’s reliance on smartphones is a growing phenomenon. Misuse or overuse of smartphones has been associated with negative effects on physical health and psychological functioning, including reduced sleep quality when smartphones are used before bedtime. Digital users are becoming increasingly aware of how smartphone use impacts their productivity and well-being. Consequently, several digital detox interventions incorporating digital nudges have been introduced to help users reduce their smartphone usage. Digital nudges are freedom-preserving, behavior-altering mechanisms that utilize user-interface design.
Methods: In this exploratory study, we examine the effectiveness of a digital nudge—in the form of tracked screen time—as a behavioral intervention to mitigate excessive smartphone use. Secondarily, we explore the potential relationship between screen time and sleep quality. A within-group experimental design, implemented as a randomized controlled trial with a sample of 17 participants, was conducted over 7 days to compare the effectiveness of a tracking-only condition with that of an active digital nudge condition.
Results: No significant evidence was found that the active digital nudge reduced screen time (primary outcome). There was a positive correlation between screen time reduction and improved sleep quality (secondary outcome), along with a significant reduction in the frequency of sleep delay in the active nudge condition (p = 0.026).
Discussion: Nonetheless, the findings of this study contribute to our understanding of the mechanisms underlying digital nudges and offer valuable insights into how their effectiveness can be improved and optimized from a behavior-analytic perspective.
Introduction
From the introduction of the first mobile radio by Bell Laboratories in the USA in 1947 to the development, by the late 1900s, of cellular phones capable of transmitting and receiving across various radio wave frequencies, mobile phone technology has evolved rapidly (1). With an expanded range of functionalities, mobile phones have become an integral part of our daily lives, for better and for worse. Smartphone use may offer short-term benefits to adolescents’ well-being, particularly when it involves active social interactions such as instant messaging or social media engagement. These forms of communication can foster a sense of connection among peers, or with like-minded individuals whom adolescents may not frequently encounter in offline settings (2, 3 as cited in 4). In addition to benefits in strengthening both social communication and work-related cooperation, the use of wearable devices also offers health advantages, including improved physical activity and reduced sedentary behavior (see 5, for a review). Marciano et al. (4) also reported small positive effects on well-being, which may reflect the fulfillment of social needs that adolescents gain from smartphone use, such as emotional expression, enjoyment, and stress-coping tactics. This may suggest a reinforcing loop that contributes to online addictive behaviors due to the emotionally and socially rewarding nature of these activities (6, as cited in 4).
Excessive use of digital devices in general—and mobile phones in particular—has been reported to have detrimental influences on physical functioning and psychological well-being, even before technology addiction was officially recognized as a global issue by the World Health Organization (7, 8). For example, Martínez-Larrínaga et al. (9) examined the relationship between screen time and sleep quality in primary school students and found a significant correlation, which may represent a risk factor extending into adolescence and adulthood.
Psychological well-being is a complex, positive mental state that plays a central role in overall mental health. Tang et al. (10) defined it as “including hedonic (enjoyment, pleasure) and eudaimonic (meaning, fulfillment) happiness, as well as resilience (coping, emotion regulation, healthy problem solving)”. In this study, we focus on the third component of this definition—resilience—as it may be functionally influenced by changes in smartphone use.
According to Peraman and Parasuraman (11), excessive smartphone use is associated with various physical health problems, including a high prevalence of musculoskeletal pain in the wrist, back, or neck—ranging from 8% to 89% (12)—as well as headache, blurred vision, eye strain, and other ocular disorders (13, 14). Beyond these physical effects, the immediate accessibility of digital interactions through smartphones may also lead to a range of sociopsychological issues. These issues include reduced self-confidence, emotional instability, delusions, depression, or nomophobia—a type of anxiety disorder characterized by extreme anxiety or insecurity when disconnected from mobile phone connectivity (15, as cited in 16). Both physical discomfort and psychological issues may contribute to sleep dysregulation at night and daytime dysfunction the next day, as sleep is a fundamental biological process that plays a critical role in regulating mood and emotional functioning (17). This can create a vicious cycle, intensifying symptoms of depression and anxiety (13, 18).
Several studies have investigated these relationships and found that insufficient sleep, poor sleep quality, subjective insomnia, and bedtime procrastination are associated with excessive screen time on mobile phones, especially before bedtime, among adolescents and young adults (17, 19–22). For example, bedtime procrastination has been found to be a mediator between problematic smartphone use and sleep quality among adolescents (aged 13–18) in Turkey (23), and between smartphone addiction and sleep quality among university students in China (24), including during the COVID-19 pandemic (25). The context of the pandemic inspired exploratory research on the effects of gender and age in relation to the use of technological devices during teleworking on sleep problems (26). In another study, Figuereido and Kulari (27) examined the role of sleep preferences and daytime chronotype in academic achievement among university students, finding that the morningness type became the most preferred after the pandemic.
Sleep duration may be reduced, as time spent in bed does not equal actual time spent sleeping, and sleep onset latency tends to be longer (28, 29; as cited by 30). The emission of short-wavelength light between 446 and 480 nm (Hannibal et al., as cited in 31), along with electromagnetic fields from mobile device screens, has been reported to delay the melatonin secretion process and disrupt normal circadian rhythms (22, 32; as cited in 21), leading to poor sleep quality and decreased sleep efficiency. Short-wavelength light information is transmitted via the retinohypothalamic tract to the suprachiasmatic nuclei (SCN) of the hypothalamus, which regulate the circadian rhythm and signal the pineal gland, responsible for melatonin production. Late-evening exposure to short-wavelength blue light reduces melatonin levels more strongly than exposure to longer wavelengths or no light 2 h before sleep, suppressing the onset of melatonin secretion and weakening sleep pressure (33–35; as cited in 31).
According to a report by the Norwegian Consumer Council (36), most technology companies, such as Google, Facebook, and Amazon, design their platforms to monetize user data by exploiting their psychological vulnerabilities and maximizing engagement for financial gain. For example, Facebook uses red-colored notifications to trigger a sense of urgency and provides an endless news feed that facilitates an infinite and mindless scrolling experience (37, as cited in Kozyreva, 38). This design approach is based on the Fogg Behavioral Model, which posits that sufficient motivation, necessary competence, and proper triggers are essential for a person to perform a target behavior (39). Fogg (39) emphasized the use of predesigned prompts that align with digital users’ abilities and motivation in order to effectively persuade them toward the desired behavior.
Eyal (40) further introduced the Hook model, which has been widely adopted by technology companies in the development of social media platforms. This model begins with a trigger that prompts an action leading to a reward, thereby initiating a continuous cycle of user investment in anticipation of future rewards. This investment increases the likelihood that users will continue responding to triggers to gain social rewards (e.g., visibility, competence, reputation). The Hook model describes an addictive feedback loop with a high return on investment, grounded in behavior-analytic principles, to build user habits (40). Digital platforms apply these principles to influence user behavior by designing algorithmic systems based on past activity, microtargeting individuals with customized advertisements, and reinforcing both mindless consumption and habitual checking behaviors (40). As a result of these compulsive and addictive behavior patterns, digital users may unknowingly develop a dysfunctional lifestyle.
Problematic Internet use has emerged as a growing social concern, with several studies reporting a negative correlation between it and resilience. In a systematic review and meta-analysis on the topic, Hidalgo-Fuentes et al. (41) found strong evidence of a weak-to-moderate relationship between these variables. One of the studies included in their analysis further revealed that this significant negative relationship also extended to levels of happiness and dispositional hope (42).
Digital nudging and its behavioral mechanisms
Thaler and Sunstein (43) defined nudging as “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives”. According to Sunstein (44), choice architecture refers to the inherent features of the physical environment and social context in which choice behavior occurs. Architecture inevitably influences behavior, even when it is not deliberately designed to do so, because choices are always made within a structured order. The behavioral economics concept of “choice architecture” is considered a libertarian paternalistic approach. It is paternalistic because nudges are designed to modify behavior with clear intentions and predictable outcomes. It is libertarian because individuals are still deemed the best judges of their own choices. In a nudging intervention, all choice options remain available and unaltered, with no manipulation of incentives or imposition of sanctions on undesired behaviors (43). Hansen (45) stated that a nudge should function independently of any sanctions related to time, trouble, or social exclusion, of economically beneficial choice alternatives, and of “the provision of factual information and rational argumentation”. To qualify as a behavior-modification procedure, a nudge must demonstrably influence behavior by exploiting existing cognitive limitations, biases, routines, habits, and social dynamics that prevent individuals from making rational choices aligned with their own preferences. By modifying the choice environment, nudges aim to either stimulate or counteract the effects of heuristics and biases to influence behavior without coercion (45). From a behavior-analytic perspective, choice architecture is described as “a form of arranging individual behavioral contingencies” (46). 
Choice architecture, or nudging, seeks to change behavior by influencing the natural and social environment, aligning with the principle of bringing behavior under the control of reinforcement contingencies (47). However, nudging focuses more on predicting behavior than on controlling it, and the choice architecture process does not involve the arrangement of reinforcers or punishers. Therefore, nudging cannot be explained using a three-term contingency, in which a response is evoked by an antecedent event, followed by a consequence that increases or decreases the likelihood of that behavior depending on its contingency (48). Andersen and Dechsling (49) suggested that nudging can be categorized as a behavior-analytic tool under the concept of stimulus control. According to Cooper et al. (50), stimulus control results from a history of differential reinforcement: the frequency of a behavior changes in the presence of specific stimuli due to prior learning experiences with similar stimuli of the same class. Heuristics and autonomous behavior function through the process of stimulus generalization, wherein a reinforcement history associated with specific stimuli generalizes to other stimuli sharing similar characteristics (51, as cited in 49). This mechanism can be applied in nudging techniques such as “default options” and “salience”. When implemented in digital environments through “the use of user-interface design elements to guide people’s behavior” (52, p. 433), they are considered “digital nudges”. Given the dramatic acceleration of technological development, the effectiveness of this subset of nudging can be further enhanced through the integration of algorithms, dark nudges (e.g., those undisclosed to users or ethically questionable), and real-time data (53).
For example, previous applications of nudging to promote healthy sleep have included sending personalized “coaching” messages (54) and implementing a multifaceted approach, such as modifying electronic health record schedules, providing reminders, and offering education, which led to fewer unnecessary overnight interventions (55).
Zimmermann and Sobolev (56) conducted a study to assess the effectiveness of digital nudges in reducing screen time. They used a variety of digital nudges, including setting a default schedule to block all apps on mobile phones (the Downtime feature in iOS Screen Time), sending prompts to limit usage on specific apps (App Limits feature), and displaying daily screen time usage (tracked through the Screen Time app). In their study, Zimmermann and Sobolev (56) combined the Downtime and App Limits features as active nudges, used Grayscale Mode as a passive nudge, and included a tracking-only control condition to examine whether these digital nudges influenced screen time. The passive nudge led to an immediate and substantial reduction in objectively measured screen time compared to the control group. Contrary to participants’ expectations that conscious, self-regulated habit formation would be the most effective approach, the active nudge resulted in a more gradual and smaller reduction in screen time. In contrast, participants in the control group, who only tracked their usage, showed no reduction in screen time.
Caraban et al. (57) demonstrated that, over the past decade, digital nudges have been effective in shaping users’ interactions with technology, as recognized by both academics and practitioners. As defined by Weinmann et al. (52), digital nudges are nudges delivered through digital technologies to influence people’s decisions or behaviors within digital contexts. For example, Bergram et al. (58) incorporated digital nudges into the design of a privacy dialog box aimed at interrupting mindless activity and simplifying the task of reading terms and privacy policies, thereby increasing users’ awareness of data privacy online. In studies specifically targeting screen time or mobile phone usage, Purohit et al. (59) tested the use of digital detox apps to examine whether users could reduce their use of social media. However, many participants were reluctant to adopt this approach, expressing concerns about the risk of personal data leakage through third-party applications that collect behavioral patterns (57). Studies on digital nudges have also addressed potential issues, such as the risk of applying a nudge too strongly, which can create friction and reduce usability. For example, in their vibration-based intervention, researchers at Cornell Tech incorporated nudging principles and negative reinforcement strategies to reduce social media usage (60). Each time a user exceeded their allotted daily Facebook usage, the intervention delivered subtle but repetitive vibrations as a digital nudge. While the design effectively reduced Facebook use, its removal left many participants feeling upset, dissatisfied with their online experience, and frustrated by the digital nudge. According to Caraban et al. (57), one possible explanation for this lack of sustainability is that participants experienced a perceived infringement on their autonomy due to the friction introduced by the nudging design. To similarly curb mindless, compulsive behavior on digital platforms, Wang et al. (61) created a Chrome plugin that introduced a 10-s delay before uploading a Facebook post, encouraging users to review the content more carefully. Although the timer could be ignored, they found that many users modified or discarded their posts during the delay period. However, Wang et al. (61) also found that many participants were annoyed and perceived the nudge as time-consuming. To avoid becoming a form of manipulation, nudges should always allow for easy opt-out options. In addition, interventions must remain transparent, with users’ well-being consistently taken into consideration (62). A nudge should be transparent enough that “the intention behind it, as well as the means by which behavioral change is pursued, could reasonably be expected … as a result of the intervention” (63). In agreement with Thaler and Sunstein’s (43) suggestion for a more proactive approach in certain cases, Purohit et al. (64) proposed that digital users should actively participate in the creation of digital detox nudges. This not only helps preserve personal autonomy but also addresses data privacy concerns and reduces usability risks that may compromise the effectiveness of the intervention.
As Zimmermann and Sobolev’s (56) study found no immediate causal impact of screen time reduction on subjective well-being or academic performance, findings that contrast with prior research linking excessive screen time to sleep disturbances, the present author conducted a modified replication of Zimmermann and Sobolev’s (56) study. This replication focused on two key research questions: (a) assessing the effectiveness of implementing both App Limits and Downtime as active digital nudges to reduce screen time on mobile phones throughout the day and before bedtime, and (b) examining the potential correlation between reduced screen time and sleep quality. Hence, the purpose of this study is to examine the effectiveness of digital nudging as a behavioral intervention to mitigate compulsive and addictive digital behaviors, specifically targeting excessive screen time (e.g., engaging in cyber-leisure activities) just before bedtime.
Method
Participants and sampling
We recruited a convenience sample with only two selection criteria: (i) participants had to be between 19 and 30 years old, as young adults are not only considered “digital natives”—having been born into and raised with digital technology, and thus more likely to engage in extensive and diverse smartphone use compared to older adults (65, 66; as cited in 67)—but also tend to suffer more from sleep deficiency and poor sleep quality (68); and (ii) participants had to be active iPhone users (as opposed to users of Android or other operating systems), due to limitations in standardizing the screen time setup instructions across Android devices, where the function names and interfaces vary by brand, unlike the more consistent setup on iOS.
To ensure that each experimental condition included at least n = 10 participants, we aimed to recruit as many participants as possible by broadening the target population to young adults residing in the European Union. All individuals who provided consent (N = 22) were enrolled in the study and directed immediately from the consent form to the baseline survey.
The information and consent form were written in English and publicly shared on the first author’s Facebook profile and Facebook groups, accompanied by a brief description of the study’s purpose and participation criteria.
No stratification based on population characteristics was applied. Convenience sampling without stratification poses a threat to the external validity of the results (i.e., higher risk of sampling bias and lack of diversity in the sample—see also 69). Therefore, this study makes no claim of representativeness and should be regarded as exploratory.
Participants were then randomly assigned to one of two experimental conditions and sent setup instructions via the email addresses they provided in the consent form. Simple randomization was carried out using a die roll: participants who rolled even numbers were assigned to the tracking-only control condition, while those who rolled odd numbers were assigned to the active nudge condition. The final sample for the follow-up survey consisted of 17 participants (M age = 25.5 years). Participants who did not confirm completing the setup of the Screen Time function, did not complete the follow-up survey, or failed to follow the instructions and complete the project within 7 days were excluded. Unfortunately, no data on participants’ gender were collected. According to Zimmermann and Sobolev (56), previous research on screen time reduction interventions reported an effect size (Cohen’s d) between 0.4 and 0.5. Thus, with the final sample of 17 participants, the study had an estimated power of 62% to detect such an effect in a paired samples t-test. This limited statistical power represents a constraint of the study and is discussed further below. In fact, low statistical power reduces the likelihood that statistically significant findings reflect true effects (70).
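The power figure above can be reproduced approximately in a few lines. The sketch below uses a normal approximation to the noncentral t distribution and assumes a one-sided paired samples t-test at α = .05 with df = 16 (critical t ≈ 1.746) and Cohen’s d = 0.5, the upper bound of the range cited above; these assumptions are ours for illustration and may not match the original calculation exactly.

```python
import math

def phi(z):
    # Standard normal cumulative distribution function via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def paired_t_power(d, n, t_crit):
    # Normal approximation to the power of a one-sided paired t-test:
    # under the alternative, the t statistic is centered at d * sqrt(n),
    # and the null hypothesis is rejected when t exceeds t_crit.
    return 1 - phi(t_crit - d * math.sqrt(n))

# Critical t for alpha = .05 (one-sided) with df = 16 is about 1.746
power = paired_t_power(d=0.5, n=17, t_crit=1.746)
```

With d = 0.5 and n = 17, this approximation yields a power of roughly 0.62, in line with the figure reported above; a two-sided test or a smaller effect size would yield substantially lower power.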
In their original study, Zimmermann and Sobolev (56) included a control condition, an active nudge condition, and a passive nudge condition. Considering potential limitations anticipated in the present project, which are discussed further in the Discussion, we chose to replicate Zimmermann and Sobolev’s (56) study using only a tracking-only control condition and an active digital nudge condition.
Experimental design
Participants who completed the pretest survey were subsequently randomly assigned to one of two conditions: (1) control condition (n = 10) and (2) active nudge condition (n = 12). The control condition was referred to as the “tracking condition” and the active nudge condition as the “self-commitment condition” in the instructions sent out to participants.
In the control condition, participants were briefly introduced to the Screen Time function and then instructed to track their screen time daily. They were asked to open the function and take a screenshot of their total smartphone usage each night before going to sleep. In the active nudge condition, participants received step-by-step instructions: first, to identify apps they found addictive or detrimental to productivity, and then to set screen time reduction using Downtime and App Limits features within the Screen Time function. Participants were encouraged to use both Downtime and App Limits for optimal results, and the follow-up survey included questions to confirm whether both features had been activated. The instructions explained how to use Downtime to block screen access during specific periods—such as for work, study, or sleep—and how to use App Limits to set daily time restrictions for specific apps or app categories. Clear examples were provided for each condition.
After 7 days of implementing the intervention, participants were asked to complete a posttest survey. The final sample included n = 9 in the control condition and n = 8 in the active nudge condition. No personal data were collected in either the baseline survey or follow-up survey that could identify or match participants; thus, the entire procedure was anonymous. Table 1 provides a complete overview of the sample characteristics collected in the baseline survey.
Materials and procedure
In compliance with personal data protection regulations in Norway, this study was submitted for assessment to the Norwegian Agency for Shared Services in Education and Research (SIKT). The management plan for processing personal information received a positive assessment with reference number 594471. Moreover, because the project collected self-report data on sleep quality, it was sent for preassessment to the Regional Committee for Medical and Health Research Ethics, which concluded that no formal approval was required (reference number 600806). A risk and vulnerability analysis of the management of personal information was performed according to the university’s guidelines and deposited on a public repository system with registration number 20/10901-138.
Description of screen time function
The iOS operating system includes the Screen Time function, which gives users a visual representation of their daily and weekly screen usage. Overall screen time data on an iPhone is divided into categories based on app functions, including social networking, productivity, gaming, and reading. Users also have access to data on what apps they use the most frequently, how often they pick up their phone, and how many notifications they get throughout the day. These features enabled participants in the control condition to continue to observe how they were using their mobile devices. In the active nudge condition, two time-management features within the Screen Time function were utilized. The “Downtime” feature allows individuals to schedule designated periods of time when only phone calls and specific apps are accessible, effectively creating a block on screen usage. The “App Limits” feature allows users to set daily time restrictions for categories such as social networking or specific apps that they feel are most addictive. These limits reset every day at midnight, and if a person surpasses their set limit, the app is blocked, accompanied by a notification to ignore limits for 1 min, 15 min, or the whole day. Users have the flexibility to extend or remove the time limits they have set for themselves, so the limits and downtime are not strictly enforced. Due to its liberty-preserving characteristic, the Screen Time function was selected as the digital nudge for this study.
Pretest survey
Data collection was handled by questionnaires created with nettskjema.no, a survey solution developed and hosted by the University of Oslo (nettskjema@usit.uio.no). After providing informed consent via the registration form, participants were directed to the pretest survey, which included questions about their self-perceived current smartphone usage and sleep quality. The survey began with a question regarding previous use of Screen Time, followed by several exploratory items assessing their ability to focus, rated on a 5-point Likert scale. Subsequently, respondents were asked to subjectively estimate their total daily screen time on a smartphone in hours and minutes, as well as the usual number of pickups triggered by notifications. They also subjectively estimated the percentage of their productive smartphone usage (0%–100%), rated their enjoyment of leisure activities on a smartphone unrelated to work or study using a 10-point Likert scale, and reported the amount of screen time spent on leisure activities before bedtime. Another exploratory question assessed respondents’ expectations regarding the effectiveness of three techniques for reducing mobile phone screen time: (1) “receiving detailed information about their individual mobile phone usage”, (2) “setting time limits for specific apps”, and (3) “designating a specific period for each day to stay away from mobile phone screen”. Each technique was rated on a 10-point Likert scale. The scoring range used to compare both the 5-point Likert scale and 10-point Likert scale in the pretest survey was calculated according to Wu (71) and is included in the Appendix. Data on the overall sleep quality of participants were collected via six measures of sleep quality derived from the Pittsburgh Sleep Quality Index (PSQI) (72). Smartphone dependence was assessed using an 11-item scale adapted from Ward et al. (73).
Posttest survey and screen time
After 7 days in the control or active nudge condition, participants were given a posttest survey with questions regarding the project, confirmation of compliance, screen time data, and sleep quality measures. All participants were asked to report, in hours and minutes, their objective daily total screen time over the last 7 days of the project, retrieved from the Screen Time app. Screenshots of screen time usage were not required, in order to keep reporting simple for participants; however, future research and replications should require them to ensure objective data. The primary dependent variable was the average daily screen time measured in minutes. These data were used to calculate the average daily screen time for each participant, enabling a comparison of smartphone usage before and after the nudging intervention.
The posttest survey tailored different questions for each condition. Participants in the control condition rated the extent to which observing their detailed mobile phone usage every day helped them to reduce their screen time over the whole day and before bedtime (1 = not effective at all, 10 = totally effective). Meanwhile, participants in the active nudge condition described how they set up their goals using Downtime and App Limits, whether they used these features every day or only on weekdays, what time limits they set, and whether they turned on “Block at End of Limit” when using App Limits—which added an extra control factor when applying this type of nudge. Participants in the active nudge condition were then asked how frequently they had broken their own limits and ignored their downtime (1 = never, 5 = about half of the time, 10 = every day), and whether using these features had enabled them to cut back on their screen time before bedtime and during the day (1 = not effective at all, 10 = totally effective).
All participants submitted an estimate of how much time they spent using their smartphones for work-related or productive purposes during the project (ranging from 0% to 100%), how committed they were to lowering their screen time (1 = not much, 10 = absolutely committed), and how much they enjoyed using their phones for recreational purposes during the project (1 = not at all, 10 = very much). Additionally, they rated how much they thought the Screen Time function had helped them to curb their screen time before bed and during the day (1 = certainly not, 10 = definitely yes). Finally, each participant answered the identical set of questions from the pretest survey to gauge their overall sleep quality, followed by a demographic question regarding their age to conclude the posttest survey.
Overall sleep quality
Overall sleep quality is measured based on the PSQI, a set of nineteen self-rating questions assessing sleep quality and disturbances (72). We extracted six items from the PSQI to measure overall sleep quality: subjective sleep quality, sleep latency, sleep duration, sleep disturbances, bedtime, and wake-up time. For this study, sleep disturbances were operationalized as the frequency of sleep delay caused by screen-based leisure activities before bedtime. According to the PSQI (72), overall sleep quality is a composite score, calculated by first assigning each question a component score in the range of 0–3 points and then summing the component scores. The final composite score ranges from 0 to 21 points, with 0 indicating no difficulty sleeping and 21 indicating severe difficulties in all areas of sleep.
We extended the range of the sleep quality response options, using different scales for each component, with the intention of increasing the sensitivity of responses and offering participants more latitude to choose exactly what they prefer (74). Subjective sleep quality was measured on a 10-point Likert scale (1 = absolutely bad sleep, 10 = absolutely good sleep), sleep latency on a 5-point scale (1 = 15 min or less, 5 = more than 2 h), sleep duration on a 4-point scale (1 = more than 7 h, 4 = less than 5 h), bedtime on a 7-point scale (1 = 9–10 pm, 7 = after 3 am), wake-up time on a 6-point scale (1 = 5–6 am, 6 = after 10 am), and sleep delay frequency on a 4-point scale (1 = not usually, 4 = every day). Although all components except subjective sleep quality were worded and categorized according to the PSQI, with lower values indicating better sleep patterns and higher values reflecting problematic sleep quality, the scores needed to be rescaled to a 10-point scale to allow comparison of overall sleep quality before and after the intervention.
Data preparation and analysis
The data were analyzed using IBM SPSS Statistics (Version 27). The subjective sleep quality variable, which was originally worded in the opposite direction from the other components, underwent reverse recoding in the SPSS data file. This recoding assigned a value of 1 to absolutely good sleep and a value of 10 to absolutely bad sleep. The mean value for each component of overall sleep quality was then computed and converted from the component's original scale to an equivalent value on a 10-point Likert scale, ensuring that all items were measured on a consistent scale. Finally, we derived a composite score by summing all component scores to evaluate overall sleep quality.
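The conversion and aggregation steps can be sketched as follows (Python). The linear mapping and the illustrative component values are our assumptions for illustration only, not the study's actual procedure or data:

```python
def rescale(value, scale_max, scale_min=1, target_min=1, target_max=10):
    """Linearly map a component score from its original Likert scale to a 10-point scale."""
    return target_min + (value - scale_min) * (target_max - target_min) / (scale_max - scale_min)

# Hypothetical component means on their original scales: (mean, scale maximum)
components = {
    "subjective_sleep_quality": (6.0, 10),  # already on a 10-point scale
    "sleep_latency": (2.0, 5),
    "sleep_duration": (2.5, 4),
    "bedtime": (3.0, 7),
    "wake_up_time": (4.0, 6),
    "sleep_delay_frequency": (1.5, 4),
}

# Composite score: sum of the six rescaled components (range 6-60, higher = poorer sleep)
composite = sum(rescale(mean, k) for mean, k in components.values())
```

With this mapping, the scale endpoints are preserved (1 stays 1, the maximum becomes 10), so components measured on different scales contribute comparably to the composite.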
Paired samples t-tests were used to analyze the data before and after the intervention. Independent samples t-tests were used to analyze the data between conditions. Spearman’s rank-order correlations were run to examine the relationships between average daily screen time and measures of overall sleep quality before and after intervention.
Within-group comparisons on the primary outcome (i.e., screen time) were performed using nonparametric statistics due to both the small sample size and the nonnormal distribution of posttest scores (results are reported below). Between-group comparisons on the secondary outcome and the exploratory outcomes were performed using parametric statistics. Although the t-test assumes normality of the data, it is robust to moderate violations1 of this assumption, particularly with sufficiently large sample sizes (76). In line with the recommendations of Skaik (77), the t-test can be used to analyze samples larger than 15 provided there are no severe outliers, which was the case in our dataset.
For all performed tests, we report whether the data meet the normal distribution assumption by including the results of Shapiro–Wilk tests (primary and exploratory outcomes) or by checking the equality of variances through Levene's test (secondary outcome). Effect sizes are consistently reported as Cohen's d, irrespective of statistical significance, and interpreted following Cohen (78). This practice is in line with the recommendations of, among others, Sullivan and Feinn (79) and Maher et al. (80), and aims to enhance the transparency of findings by moving beyond the binary null-hypothesis decision to include a measure of magnitude (81).
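For the between-group comparisons, Cohen's d can be computed from the group means and standard deviations via the pooled standard deviation. A minimal sketch (Python), shown here with the group statistics reported later for frequency of sleep delay; the function itself is our illustration, not code used in the study:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Group statistics for frequency of sleep delay (control vs. active nudge)
d = cohens_d(2.22, 0.833, 9, 1.38, 0.518, 8)  # ~1.19, i.e., a large effect
```

Rounded, this reproduces the d = 1.20 reported for that comparison.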
Results
Primary outcome: screen time
The sample's mean estimated daily screen time before the intervention was 278.95 min (SD = 101.97, N = 22). Following the intervention, the average daily screen time increased to 312.88 min (SD = 158.69, N = 17). Given the small sample size, determining the distribution of the screen time variable was important for choosing an appropriate statistical method. A Shapiro–Wilk test showed that pretest screen time was normally distributed (W = 0.965, p = 0.733), but posttest screen time departed significantly from normality (W = 0.869, p = 0.022). Based on this outcome, a nonparametric test was used, and the median with the interquartile range was used to summarize screen time. A Wilcoxon signed-rank test showed that our intervention did not elicit a statistically significant change in screen time (Z = −0.497, p = 0.619). Median pretest screen time was 300 min, and median posttest screen time was 288 min. We applied the formula Z/√n to calculate the effect size, which returned a value of 0.012 (i.e., a small effect, according to Cohen's classification of effect sizes).
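The effect size conversion used here, r = |Z|/√n, can be illustrated with a short sketch (Python; the Z and n values below are illustrative, not the study's):

```python
import math

def wilcoxon_effect_size(z, n):
    """Effect size r for a Wilcoxon signed-rank test: r = |Z| / sqrt(n),
    where n is the number of observations contributing to the test."""
    return abs(z) / math.sqrt(n)

# Illustrative values only
r = wilcoxon_effect_size(z=-1.5, n=20)  # ~0.335, a medium effect by the r benchmarks
```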
Next, we compared the average daily screen time between the nine participants in the control group (M = 299.29, SD = 148.33) and the eight participants in the active nudge condition (M = 328.29, SD = 178.63). A new Shapiro–Wilk test was performed to check whether the normality assumption was met. While the results in the tracking-only condition were normally distributed (W = 0.922, p = 0.407), the results in the active nudge condition were not (W = 0.814, p = 0.040). Thus, we analyzed between-group differences using a Mann–Whitney U test, which did not indicate any significant difference (U = 31, p = 0.630). The effect size, calculated with the same formula reported above, was 0.117 (i.e., a small effect, according to Cohen's classification of effect sizes). This result suggests that the active nudge condition, which incorporated both time limits and a downtime schedule, was no more effective in reducing screen time than the control condition, which involved only self-monitoring of screen time usage.
To gain an overview of the average screen time progression over the span of 7 days for participants in both conditions, a line graph was employed. Participants' screen time in the control condition showed minimal variation across the 7 days, while participants with app limits and downtime displayed a substantial reduction in their usage after the first 2 days of the intervention. However, usage rose slightly on the final day of the project (see Figure 1 for details). The results of the nonparametric statistical analyses for both the within-group (a) and between-group (b) comparisons are included in Table 2.

Figure 1. Screen time progression over 7 days. Average daily screen time (in minutes) across 7 days for the control condition group and the active nudge condition group.

Table 2. Results of parametric (a) and non-parametric (b) repeated measures test of the primary outcome (screen time).
Secondary outcome: overall sleep quality
An independent samples t-test was carried out to investigate whether there was any statistically significant difference between the control and active nudge conditions on the sleep quality variables. Levene's test for equality of variances returned p-values of 0.308, 0.649, 0.803, 0.067, and 0.150 for subjective sleep quality, sleep duration, bedtime, wake-up time, and frequency of sleep delay because of screen time, respectively. Since these values were greater than 0.05, there were no statistically significant differences in the variances between the two conditions. The assumption of equal variances was not met for the sleep latency variable, however, as Levene's test returned a p-value of 0.021.
The two-tailed p-values indicated no statistically significant difference in means between the two conditions for the subjective sleep quality, sleep latency, sleep duration, bedtime, and wake-up time measures, although effect sizes as measured by Cohen's d were medium to large: d = 0.62 (medium) for sleep latency, d = 0.67 (medium) for usual bedtime, d = 0.91 (large) for sleep duration, d = 0.96 (large) for wake-up time, and d = 1.01 (large) for subjective sleep quality.
With equal variances assumed for the frequency of sleep delay because of screen time, the results indicated that participants in the active nudge condition (M = 1.38, SD = 0.518) had a significantly lower frequency of sleep delay because of screen time than participants in the control condition (M = 2.22, SD = 0.833); t(15) = 2.477, two-tailed p = 0.026. The effect size was large, with a Cohen's d of 1.20 (see Table 3 for more details).

Table 3. Results for pre- and posttest survey on sleep quality measures and exploratory measures across conditions.
In the pretest data, no correlation was found between average daily screen time and usual bedtime, sleep latency, sleep duration, sleep disturbance, or subjective sleep quality. However, there was a moderate, significant negative correlation between average daily screen time and usual wake-up time, rs(20) = −0.43, p = 0.043 (i.e., a medium to large effect). The posttest data, on the other hand, did not reveal any significant correlation between average daily screen time and any of the measures of overall sleep quality (see Table 4 for the complete correlation matrix).
After being calculated and converted into equivalent values on a 10-point Likert scale, all six component scores were summed into a final composite score representing overall sleep quality. The composite score was 29.6 at pretest; at posttest, it was 29.2 in the control condition but only 20 in the active nudge condition (out of a maximum of 60 points across the six components). The interpretation of this composite score followed PSQI guidelines, where higher scores indicate poorer overall sleep quality and lower scores reflect better sleep quality.
Exploratory outcomes
Effectiveness expectations before the intervention
There were no significant disparities in the perceived effectiveness of the three distinct Screen Time features aimed at reducing screen time. App Limits (M = 5.68, SD = 2.607) and Downtime (M = 5.86, SD = 2.981) were not expected to be more effective than the tracking-only technique (M = 5.82, SD = 2.423), which solely involves self-monitoring.
Acceptability and perceived effectiveness after the intervention
Approximately half of the participants in the active nudge condition reported surpassing and disregarding their self-set limits and downtime around 50% of the time (M = 4.5, SD = 1.69). Additionally, frequencies indicated that roughly 38% of participants in the active nudge condition found App Limits useful for reducing screen time throughout the day (M = 7.25, SD = 2.816), while half of the participants agreed that App Limits effectively reduced screen time before bedtime (M = 8.38, SD = 1.3). More than 70% of participants in the active nudge condition rated Downtime as moderately to certainly effective and useful for reducing screen time both throughout the day (M = 7.75, SD = 2.375) and before bedtime (M = 8.63, SD = 0.916). Meanwhile, over 40% of participants in the control condition rated the tracking-only feature as moderately ineffective or having no effect on reducing screen time throughout the day (M = 6.33, SD = 2.45).
We assessed whether the perceived efficacy of using the Screen Time function to reduce overall screen time throughout the day was normally distributed using the Shapiro–Wilk test. The results (W = 0.942, p = 0.602 for the tracking-only condition; W = 0.824, p = 0.051 for the active nudge condition) indicated that the data were approximately normally distributed. An independent t-test revealed no significant difference between the active nudge condition and the control condition (M_control = 6.44, M_active = 7.38, t(15) = 0.734, p = 0.474). Cohen's d returned a value of 0.357, indicating a small-to-medium effect size.
Next, we tested the normality of the data for the perceived effectiveness of using Screen Time to reduce screen time before bedtime with a Shapiro–Wilk test, which indicated that this assumption was met (W = 0.895, p = 0.223 in the tracking-only condition and W = 0.860, p = 0.120 in the active nudge condition). The results of an independent t-test indicated that participants in the active nudge condition perceived Screen Time as significantly more effective in reducing smartphone usage before bedtime than participants in the control condition (M_control = 5, M_active = 9, t(9.867) = 3.459, p = 0.006). The effect size was large, with Cohen's d = 1.598.
The data for participants’ commitment to reducing screen time met the normality assumption in the tracking-only condition, as indicated by the results of a Shapiro–Wilk test (W = 0.948, p = 0.666), but failed to do so in the active nudge condition (W = 0.724, p = 0.004). Notably, there were no significant differences in the level of commitment to reducing screen time between the two conditions (M_control = 5.67, M_active = 7.5, t(9.554) = 2.058, p = 0.068)2.
Productivity and leisure enjoyment
These data were not normally distributed in the tracking-only condition according to a Shapiro–Wilk test (W = 0.781, p = 0.012), but they did follow a normal distribution in the active nudge condition (W = 0.847, p = 0.088). Participants in the active nudge condition reported a significantly greater percentage of productive time during the nudging intervention than participants in the control condition (M_control = 34.44, M_active = 53.13; t(15) = 2.218, p = 0.042). The effect size was large, with a Cohen's d of 1.078.
However, the level of enjoyment of leisure activities did not differ between the two conditions (M_control = 6.78, M_active = 7.50; t(15) = −1.097, p = 0.290). These data met the normality assumption according to a Shapiro–Wilk test (W = 0.903, p = 0.273 in the tracking-only condition and W = 0.935, p = 0.563 in the active nudge condition).
Discussion
The primary focus of this study was to investigate the potential for reducing excessive smartphone usage through the implementation of active digital nudges. Additionally, the study aimed to explore the correlation between screen time and overall sleep quality. This research was conducted as a systematic replication of Zimmermann and Sobolev's (56) original study, which yielded positive outcomes. Although exploratory due to sampling limitations, the findings of this study suggest that screen time may be influenced by active nudges, albeit to a moderate degree. The nonparametric tests addressing the primary outcome showed no statistically significant effect of active nudges on screen time reduction. However, active digital nudges appeared to contribute to a noticeable decrease in screen time after a few days.
This result is inconsistent with the findings of Zimmermann and Sobolev (56). Although the experimental conditions were similar, the discrepancy may be attributed to the small sample size in our study. Recruitment and follow-up were hindered by the level of engagement required to implement active nudges and reflect on phone usage. According to Sunstein's (82) explanation of why nudges may fail, a nudge can be ineffective when the perceived cost of rejecting it is minimal and loss aversion is irrelevant. In the context of this study, it is possible that default rules such as time limits and prescheduled downtime do not exert a sustainable influence on the desired behavior. This lack of influence may arise because individuals do not perceive any tangible negative consequences, either economic or mental, when they intuitively reject these nudges. The rational evaluation of the costs and benefits associated with ignoring active digital nudges might also be affected by the availability heuristic (82). For instance, individuals may experience inertia during the day, leading them to compulsive and habitual screen checking that gives immediate access to satisfaction rather than engaging in thoughtful reflection on the advantages of reducing screen time. Moreover, Hummel and Maedche (83) posited that the effect of nudges does "not only depend on the nudge itself but also on how it is perceived by an individual", and many studies have indicated that the effectiveness of nudging can be moderated by strong personal preferences (82).
Active nudges
The active nudges employed in this experiment adhered strictly to nudging principles. They were designed to be simple to opt out of and did not cause any intense friction, allowing participants to continue enjoying their leisure time on screen. Unlike other digital nudging approaches that required participants to install external applications or plugins to activate interventions, this experiment took advantage of built-in features on iPhones as cost-free solutions that address mobile overuse while mitigating the risk of personal data breaches. Nevertheless, it is precisely this soft paternalistic nature of the nudging approach, free of any form of compliance or coercion, that entails a substantial lack of the control required for a complete behavioral modification intervention. Hayes and Brownstein (84) stated firmly that the behavior-analytic perspective on the goals of science emphasizes control in addition to prediction. The emphasis on control serves to demonstrate the direction of influence between variables that are assumed to be functionally related. Strict control of behavior through environmental variables is extremely difficult to achieve under nonexperimental conditions. This point of view has been adopted by most behavior analysts working in the applied branch of the science; here, we prefer the more palatable term influence.
The old pattern of phone usage began to gradually reemerge after a few days, even when the active nudges seemed to take effect. This could be attributed to how frequently participants broke and ignored the app limits and downtime. Specifically, 50% of the participants (N = 8) admitted to having ignored the limits and downtime about half of the time during the 7-day project. Further strategies for the implementation of nudges are thus needed to establish the long-term effects of nudging. Simon and Tagliabue (48) suggested, from a behavior-analytic viewpoint, that when a nudge fails, designing a change in the contingency between choices and consequences is necessary to achieve and sustain the desired behavior change. In fact, no behavior change is maintained without behavior being brought into contact with its reinforcement contingencies. The authors suggest that more adaptive nudging, such as setting up microgoals as reinforcement loops or mood-tracking prompts that offer personalized reminders whenever stress or fatigue from excessive screen time occurs, might improve the practical prevalence of future digital active nudges. For example, an app called "Forest", available for both iOS and Android devices, offers features for personalized nudges to help users concentrate without smartphone overuse. The app lets users build their own garden by choosing a certain type of tree or flower to grow during their focus time. Each type of flower requires a different amount of time to grow: tulips require only half an hour to grow fully, while peonies need 1 h to bloom to full size. Users choose the type of flower depending on how much time they want to dedicate fully to their work, and if they exit the app at any point during the growing period to use other apps, the flower withers and they lose their progress with that flower.
Each flower and tree, therefore, functions as a microgoal, and the Forest app also has a Friends mode so that people can add each other and see each other's progress in focusing. This can be considered a nudge that is adaptive to those who favor the aesthetic of a digital garden with no monetary incentives, easy to opt out of, totally transparent, and compliant with nudging principles.
After the active nudge was introduced as the independent variable, there was, as expected, not much difference in the sleep quality component measures compared with the control condition, except for a significant improvement in the frequency of sleep delay because of screen time. This result is in line with previous studies, which found that personal phone overuse was associated with sleep disturbances in young adults (17, 20).
Decreased sleep disturbances also mirror the positive perceived effectiveness of the nudging intervention in reducing screen time before bedtime reported by participants in the active nudge condition. Quite surprisingly, further results from Spearman's rank-order correlation tests did not show any relationship between average screen time and any of the overall sleep quality measures. Sleep quality is a complex construct with many dimensions, whereas nudges are relatively simple and targeted interventions that may affect only a limited aspect of sleep quality (e.g., sleep latency or frequency of sleep delay). Thus, more complex behavioral repertoires may benefit from a nudge "plus" initiative, which incorporates elements of reflection to improve its effectiveness and align it better with the agent's autonomy and agency (85). From a behavioral perspective, these nudges involve not only the acquisition of stimulus control but also stimulus generalization: learning to respond to a wider set of stimuli than in the original setting (86).
Wake-up time was negatively correlated with daily screen time before the intervention; after the intervention, however, there was no evidence that changes in average daily screen time were related to poorer sleep quality. Overall sleep quality appeared better for participants in the active nudge condition and for participants before the intervention, as those in the tracking-only condition seemed to have more difficulty sleeping. Mixed results of nudging on sleep quality through smartphone use were also reported by Olson et al. (87), who observed small changes in their participants' cognition and mood. Moreover, they reported that smartphone use among their participants reverted to preintervention levels 6 weeks after the onset of their nudge-based intervention.
Study limitations
We referred to our study as exploratory due to three main limitations: the small sample size, the self-report nature of the data collection, and the short duration of the intervention.
The first limitation of our study was the small sample size, which reduces the robustness and the generalizability of the findings. Small sample size increases the likelihood of type II errors (false negatives; see 88).
Although we did not find a statistically significant main effect of the active nudge, the small sample size does not pose as great a threat to our conclusions as it would have if a significant effect had been observed. Nevertheless, very small sample sizes undermine both the internal and external validity of the research (89). It is possible that the results were influenced by sampling bias, as participants were primarily recruited from the first author’s network. Moreover, gender information was not collected, and due to the small sample size, no inferences could be made regarding potential gender effects on the influence of the independent variables on any of the dependent measures.
Future research should adopt broader recruitment strategies to increase sample size, conduct a priori power analyses, and diversify the participant pool by including both iOS and Android users, thereby reducing the biases associated with convenience sampling. As Eysenbach (90) noted, "conclusions drawn from a convenience sample are limited and need to be qualified in the Discussion section of a paper". Accordingly, future studies could follow the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) when designing and administering online questionnaires in the medical field.
The second limitation concerns the self-report nature of data collection, which may threaten the internal validity of our findings. For example, participants were not required to submit screenshot confirmations or other forms of objective verification when reporting their dependent measures. While this approach likely reduced response burden and increased participation, it may also have compromised the accuracy or truthfulness of the data provided. Grayscale mode, used as a passive nudge in the original study, was not included in this study due to its absence in later versions of iOS. Consequently, this replication included only a control condition and an active nudge condition.
The third limitation concerns the duration of our study: it was conducted over a much shorter period (1 week) than the original study by Zimmermann and Sobolev (56), which may explain the nonsignificant effect of active nudges in this study, as the active nudges were indicated to have a more gradual effect over a span of 3 weeks. A high percentage of participants claimed that they had used the Screen Time function before, and the transparency of the study's purpose may have introduced biases from participants' "perceived expectations of improvement" (50). Prior use of the Screen Time function might be a confounding variable to be controlled for in future replications of this study.
The imprecision and inconsistency of the scales in the measurement tools used in this experiment are another possible confounding variable. The data preparation procedure used to translate the sleep quality component scores into more comprehensible and comparable values, as well as the calculation of the composite score for overall sleep quality, are certainly not the most rigorous assessments, but they were our best attempt to correct for some of the issues with the original PSQI. The original tool gave instructions to calculate a component of sleep quality called "habitual sleep efficiency" as a percentage by dividing the actual hours slept (sleep duration) by the total hours spent in bed, which equals usual wake-up time minus usual bedtime (72). However, as this study followed the original tool in measuring sleep duration with Likert-type values representing approximate ranges (for example, 1 as more than 7 h and 2 as 6 to 7 h), the data collected were not precise to the minute for this calculation. After careful consideration, we decided to calculate and compare composite scores of overall sleep quality, a measure that was consistent across participants, even though it carries validity issues. This scoring method was not subject to validation, and future studies should address the lack thereof. When compared with other sleep quality assessments, the PSQI showed a high correlation with the Sleep Quality Scale (SQS; 91), and the Sleep Regulatory Questionnaire (92) was introduced to measure subjective sleep regularities.
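The PSQI's habitual sleep efficiency formula described above is straightforward when exact times are available; a minimal sketch (Python; the function and the times shown are hypothetical illustrations, not study data):

```python
def habitual_sleep_efficiency(hours_slept, bedtime_h, wake_h):
    """PSQI habitual sleep efficiency: actual sleep time divided by time in bed,
    expressed as a percentage. Inputs are clock hours; bed intervals crossing
    midnight wrap around."""
    time_in_bed = (wake_h - bedtime_h) % 24
    return 100.0 * hours_slept / time_in_bed

# Hypothetical: in bed from 23:00 to 07:00 (8 h in bed), 6.5 h actually asleep
eff = habitual_sleep_efficiency(6.5, bedtime_h=23, wake_h=7)  # 81.25%
```

Because the sleep duration item in this study recorded ranges rather than minutes, a computation like this was not feasible with the collected data, as noted above.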
Sohn et al. (93) emphasized that the duration of smartphone use alone does not indicate addiction but rather reflects an increased likelihood of developing behavioral addiction patterns. Therefore, future research should investigate whether a broader range of smartphone usage patterns is more strongly associated with smartphone addiction and its potential interaction with adverse effects on sleep and other health outcomes. There is a strong demand for a deeper understanding of the underlying mechanisms of smartphone overuse and for the development of effective interventions, both of which are of theoretical and practical significance.
Directions for future research
Our study represents a systematic replication of Zimmermann and Sobolev (56), with deliberate modifications made to specific elements of the original design. These changes were deemed necessary due to the considerable challenges of replicating the study in natural settings, where numerous uncontrolled variables are prevalent. Furthermore, direct replications, especially intrasubject replication studies, may raise ethical concerns regarding their implementation. Conducting the experiment repeatedly may exacerbate potential damage to the subject by inducing irreversible behavioral changes (94, as cited in 95).
Studies involving humans also require careful consideration of the complexities associated with their "lengthy, varied and unknown histories that might interact in significant ways with experimental procedures" (95), as a single participant may respond differently under even subtle forms of pressure. For instance, participants' perceptions of what constitutes a good night's sleep may influence the perceived effectiveness of a digital nudge aimed at improving sleep quality. In this particular experiment, individual identities could have been matched for analytical purposes by asking participants to provide their email addresses in both the pre- and posttest surveys. However, to minimize the collection of personal information and for ethical considerations, the project collected names and email addresses only in the information and consent form. As a result, participants cannot be identified in either the pre- or posttest survey. Only data collected from the pre- and posttest surveys were transferred from the Excel files generated by Nettskjema, and only numerical data were used for analysis. This procedure was specified in advance during the project evaluation by SIKT and complied with the framework for balancing data openness with participant confidentiality in psychiatric and behavioral research proposed by Zhang et al. (96), which classifies such data as ranging from moderately to minimally sensitive.
Further research should also address the diminishing effect of the nudge as participants progressed through exposure to the independent variable (see also 97). From a functional perspective, this may be attributed to the fading of the stimulus control exerted by the change in choice context. In other words, participants may have become so accustomed to the nudge that it no longer influences their behavior. Alternatively, the immediate and gratifying consequences associated with the unwanted behavior may have outweighed the delayed but healthier outcomes promoted by the nudge. This phenomenon is known as temporal discounting, wherein individuals choose smaller-sooner rewards over larger-later ones (98). Therefore, for future interventions to succeed, it is important to design nudges that not only initiate behavioral change but also help maintain it over time, especially in the face of temptations and competing reinforcers—potentially by incorporating elements such as boosts (see 99).
Finally, nudge interventions raise some questions regarding their foundational principles. One such issue is the preservation of individual freedom of choice, which is considered a core criterion for an intervention to qualify as a nudge. A key point of debate is the extent to which nudges implemented by third parties, such as governments or policymakers, can genuinely uphold individual autonomy and freedom of choice (100). Even when the choices offered are not restricted by explicit obstacles, the architecture in which they are presented may still constrain an individual’s autonomy.
Nudges can obscure transparency by obfuscating choices at the moment of decision-making, thereby impeding freedom of choice (101, as cited in 100). Ethical implications of nudge interventions—whether from policymakers or the private sector—should be seriously considered from the outset of the conceptual design phase.
Data availability statement
The datasets presented in this article are not readily available because all raw data were deleted after completion of the study, which was originally a graduate thesis. However, .sav files will be shared upon reasonable request. Requests to access the datasets should be directed to marco.tagliabue@oslomet.no.
Ethics statement
Ethical approval was not required for the study involving humans in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
TV: Data curation, Writing – review & editing, Methodology, Software, Investigation, Writing – original draft, Validation, Resources, Conceptualization, Visualization, Project administration, Formal analysis. MT: Data curation, Funding acquisition, Writing – review & editing, Software, Supervision, Investigation, Conceptualization, Project administration.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. APCs were funded by OsloMet - Oslo Metropolitan University.
Acknowledgments
The present text was adapted from the first author’s master’s thesis. The authors are grateful to Gunnar Ree for his comments on a previous draft of this manuscript.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that Generative AI was used in the creation of this manuscript. ChatGPT, accessed via Microsoft Copilot, was used to proofread the manuscript. The authors reviewed the content and take responsibility for the content of the publication.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1602997/full#supplementary-material
Footnotes
- ^ Moderate violations are assessed based on the researcher’s interpretation of how strongly the trait of interest deviates from normality: for example, when data are not extremely non-normal (Chi-squared, bimodal, or long-tailed distributions; see 75).
- ^ We performed a nonparametric Mann–Whitney U test on the variable “commitment level of reducing screen time” after the normality assumption was not met in the active nudge condition; this returned results similar to the t-test (U = 20, p = 0.109, Cohen’s d = 0.044, a small effect).
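For readers unfamiliar with the test mentioned in the footnote above, the U statistic can be computed from pooled ranks as sketched below. The ratings are hypothetical illustrations, not the study’s data, and ties are ignored for simplicity:

```python
def mann_whitney_u(x, y):
    """Compute the Mann-Whitney U statistic (no tie correction).

    Ranks the pooled samples, sums the ranks of the first group,
    and returns the smaller of the two U statistics.
    """
    pooled = sorted((v, g) for g, vals in enumerate((x, y)) for v in vals)
    # rank 1 = smallest pooled value; assumes no tied values
    rank_sum_x = sum(rank for rank, (_, g) in enumerate(pooled, start=1) if g == 0)
    n1, n2 = len(x), len(y)
    u1 = rank_sum_x - n1 * (n1 + 1) / 2
    u2 = n1 * n2 - u1
    return min(u1, u2)

# Hypothetical commitment-level ratings for the two conditions
tracking_only = [3, 5, 2, 6]
active_nudge = [7, 8, 4, 9]
print(mann_whitney_u(tracking_only, active_nudge))  # → 2.0
```

A small U relative to n1 × n2 indicates that most values in one group rank below the other; the p-value is then obtained from the U sampling distribution (or a normal approximation for larger samples).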
References
1. Agar J. Constant touch: A global history of the mobile phone. London, United Kingdom: Icon Books Ltd (2013).
2. Davis K. Friendship 2.0: adolescents' experiences of belonging and self-disclosure online. J Adolescence. (2012) 35:1527–36. doi: 10.1016/j.adolescence.2012.02.013
3. Dienlin T and Johannes N. The impact of digital technology use on adolescent well-being. Dialogues Clin Neurosci. (2020) 22:135–42. doi: 10.31887/DCNS.2020.22.2/tdienlin
4. Marciano L, Driver CC, Schulz PJ, and Camerini A-L. Dynamics of adolescents’ smartphone use and well-being are positive but ephemeral. Sci Rep. (2022) 12:1316. doi: 10.1038/s41598-022-05291-y
5. Longhini J, Marzaro C, Bargeri S, Palese A, Dell’Isola A, Turolla A, et al. Wearable devices to improve physical activity and reduce sedentary behaviour: an umbrella review. Sports Med - Open. (2024) 10:9. doi: 10.1186/s40798-024-00678-9
6. Brand M, Wegmann E, Stark R, Müller A, Wölfling K, Robbins TW, et al. The Interaction of Person-Affect-Cognition-Execution (I-PACE) model for addictive behaviors: Update, generalization to addictive behaviors beyond internet-use disorders, and specification of the process character of addictive behaviors. Neurosci Biobehav Rev. (2019) 104:1–10. doi: 10.1016/j.neubiorev.2019.06.032
7. World Health Organization. International Classification of Diseases for Mortality and Morbidity Statistics (11th Revision) (2020). Available online at: https://icd.who.int/browse11/l-m/en.
8. Dresp-Langley B and Hutt A. Digital addiction and sleep. Int J Environ Res Public Health. (2022) 19:6910. doi: 10.3390/ijerph19116910
9. Martínez-Larrínaga A, Martín-Laña N, Lavín B, Arce-Larrory O, Larrínaga-Undabarrena A, Zabala-Domínguez O, et al. Influence of physical activity and screen time on sleep quality in primary school students. South Florida J Dev. (2024) 5:e4531. doi: 10.46932/sfjdv5n10-037
10. Tang Y-Y, Tang R, and Gross JJ. Promoting psychological well-being through an evidence-based mindfulness training program. Front Hum Neurosci. (2019) 13:237. doi: 10.3389/fnhum.2019.00237
11. Peraman R and Parasuraman S. Mobile phone mania: Arising global threat in public health. J Natural Science Biology Med. (2016) 7:198. doi: 10.4103/0976-9668.184712
12. Zirek E, Mustafaoglu R, Yasaci Z, and Griffiths MD. A systematic review of musculoskeletal complaints, symptoms, and pathologies related to mobile phone usage. Musculoskeletal Sci Pract. (2020) 49:102196. doi: 10.1016/j.msksp.2020.102196
13. Demirci K, Akgönül M, and Akpinar A. Relationship of smartphone use severity with sleep quality, depression, and anxiety in university students. J Behav Addict. (2015) 4:85–92. doi: 10.1556/2006.4.2015.010
14. Issa LF, Alqurashi KA, Althomali T, Alzahrani TA, Aljuaid AS, and Alharthi TM. Smartphone Use and its Impact on Ocular Health among University Students in Saudi Arabia. Int J Prev Med. (2021) 12:149. doi: 10.4103/ijpvm.IJPVM_382_19
15. SecurEnvoy. 66% of the population suffer from nomophobia, the fear of being without their phone (2012). Available online at: https://www.securenvoy.com/blog/2012/02/16/66-of-thepopulation-suffer-from-nomophobia-the-fear-of-being-without-their-phone/.
16. Yildirim C and Correia AP. Exploring the dimensions of nomophobia: Development and validation of a self-reported questionnaire. Comput Hum Behav. (2015) 49:130–7. doi: 10.1016/j.chb.2015.02.059
17. Thomée S, Härenstam A, and Hagberg M. Mobile phone use and stress, sleep disturbances, and symptoms of depression among young adults-a prospective cohort study. BMC Public Health. (2011) 11:1–11. doi: 10.1186/1471-2458-11-66
18. Dresp-Langley B and Hutt A. Digital addiction and sleep. Int J Environ Res Public Health. (2022) 19:6910. doi: 10.3390/ijerph19116910
19. Christensen MA, Bettencourt L, Kaye L, Moturu ST, Nguyen KT, Olgin JE, et al. Direct measurements of smartphone screen-time: relationships with demographics and sleep. PloS One. (2016) 11:e0165331. doi: 10.1371/journal.pone.0165331
20. Levenson JC, Shensa A, Sidani JE, Colditz JB, and Primack BA. The association between social media use and sleep disturbance among young adults. Prev Med. (2016) 85:36–41. doi: 10.1016/j.ypmed.2016.01.001
21. Fossum IN, Nordnes LT, Storemark SS, Bjorvatn B, and Pallesen S. The association between use of electronic media in bed before going to sleep and insomnia symptoms, daytime sleepiness, morningness, and chronotype. Behav Sleep Med. (2014) 12:343–57. doi: 10.1080/15402002.2013.819468
22. Cain N and Gradisar M. Electronic media use and sleep in school-aged children and adolescents: A review. Sleep Med. (2010) 11:735–42. doi: 10.1016/j.sleep.2010.02.006
23. Bozkurt A, Demirdöğen EY, and Akıncı MA. The association between bedtime procrastination, sleep quality, and problematic smartphone use in adolescents: A mediation analysis. Eurasian J Med. (2024) 56:69–75. doi: 10.5152/eurasianjmed.2024.23379
24. Zhang MX and Wu AMS. Effects of smartphone addiction on sleep quality among Chinese university students: The mediating role of self-regulation and bedtime procrastination. Addictive Behav. (2020) 111:106552. doi: 10.1016/j.addbeh.2020.106552
25. Huang T, Liu Y, Tan TC, Wang D, Zheng K, and Liu W. Mobile phone dependency and sleep quality in college students during COVID-19 outbreak: the mediating role of bedtime procrastination and fear of missing out. BMC Public Health. (2023) 23:1200. doi: 10.1186/s12889-023-16061-4
26. Figueiredo S, João R, Alho L, and Hipólito J. Psychological research on sleep problems and adjustment of working hours during teleworking in the COVID-19 pandemic: an exploratory study. Int J Environ Res Public Health. (2022) 19:14305. doi: 10.3390/ijerph192114305
27. Figueiredo S and Kulari G. Sleep preferences and chronotype traits impact on academic performance among university students. Eur J Educ Res. (2024) 13:895–909. doi: 10.12973/eu-jer.13.3.895
28. Lastella M, Halson SL, Vitale JA, Memon AR, and Vincent GE. To Nap or Not to Nap? A Systematic Review Evaluating Napping Behavior in Athletes and the Impact on Various Measures of Athletic Performance. Nat Sci sleep. (2021) 13:841–62. doi: 10.2147/NSS.S315556
29. Scott H and Woods HC. Fear of missing out and sleep: Cognitive behavioural factors in adolescents' nighttime social media use. J adolescence. (2018) 68:61–5. doi: 10.1016/j.adolescence.2018.07.009
30. Hjetland GJ, Skogen JC, Hysing M, and Sivertsen B. The association between self-reported screen time, social media addiction, and sleep among Norwegian University students. Front Public Health. (2021) 9:794307. doi: 10.3389/fpubh.2021.794307
31. Höhn C, Schmid SR, Plamberger CP, Bothe K, Angerer M, Gruber G, et al. Preliminary results: the impact of smartphone use and short-wavelength light during the evening on circadian rhythm, sleep and alertness. Clocks sleep. (2021) 3:66–86. doi: 10.3390/clockssleep3010005
32. Bjorvatn B and Pallesen S. A practical approach to circadian rhythm sleep disorders. Sleep Med Rev. (2009) 13:47–60. doi: 10.1016/j.smrv.2008.04.009
33. Cajochen C, Kräuchi K, and Wirz-Justice A. Role of melatonin in the regulation of human circadian rhythms and sleep. J Neuroendocrinol. (2003) 15:432–7. doi: 10.1046/j.1365-2826.2003.00989.x
34. Dubocovich ML, Rivera-Bermudez MA, Gerdin MJ, and Masana MI. Molecular pharmacology, regulation and function of mammalian melatonin receptors. Front Biosci. (2003) 8:1093–108.
35. Cajochen C, Münch M, Kobialka S, Kräuchi K, Steiner R, Oelhafen P, et al. High sensitivity of human melatonin, alertness, thermoregulation, and heart rate to short wavelength light. J Clin Endocrinol Metab. (2005) 90:1311–6.
36. Norwegian Consumer Council. Deceived by design (2018). Available online at: https://storage02.forbrukerradet.no/media/2018/06/2018-06-27-deceived-by-design-final.pdf (Accessed on February 12, 2025).
38. Lewandowsky S and Hertwig R. Citizens versus the internet: Confronting digital challenges with cognitive tools. psychol Sci Public Interest. (2020) 21:103–56. doi: 10.1177/1529100620946707
39. Fogg BJ. (2009). A behavior model for persuasive design, in: Proceedings of the 4th International Conference on Persuasive Technology. Association for Computing Machinery. New York, NY, United States.
41. Hidalgo-Fuentes S, Martí-Vilar M, and Ruiz-Ordoñez Y. Problematic internet use and resilience: A systematic review and meta-analysis. Nurs Rep (Pavia Italy). (2023) 13:337–50. doi: 10.3390/nursrep13010032
42. Yilmaz R and Karaoglan Yilmaz FG. Problematic internet use in adults: the role of happiness, psychological resilience, dispositional hope, and self-control and self-management. J Rational-Emotive Cognitive-Behavior Ther. (2023) 41:727–45. doi: 10.1007/s10942-022-00482-y
43. Thaler RH and Sunstein CS. Nudge: The final edition. New York, United States of America: Penguin (2021).
44. Sunstein CR. Nudging: A very short guide. J Consumer Policy. (2014) 37:583–8. doi: 10.1007/s10603-014-9273-1
45. Hansen PG. The definition of nudge and libertarian paternalism: Does the hand fit the glove? Eur J Risk Regul. (2016) 7:155–74. doi: 10.1017/S1867299X00005468
46. Tagliabue M and Sandaker I. Societal well-being: Embedding nudges in sustainable cultural practices. Behav Soc Issues. (2019) 28:99–113. doi: 10.1007/s42822-019-0002-x
47. Rachlin H. Choice architecture: A review of Why nudge: The politics of libertarian paternalism. J Exp Anal Behav. (2015) 104:198–203. doi: 10.1002/jeab.163
48. Simon C and Tagliabue M. Feeding the behavioral revolution: Contributions of behavior analysis to nudging and vice versa. J Behav Economics Policy. (2018) 2:91–7. http://sabeconomics.org/wordpress/wp-content/uploads/JBEP-2-1-13.pdf.
49. Andersen F and Dechsling A. Nudging, hva er det og bør vi benytte oss av det? Norsk Tidsskrift Atferdsanalyse. (2021) 48:325–34. Available at: https://nta.atferd.no/loadfile.aspx?IdFile=2036
50. Cooper JO, Heron TE, and Heward WL. Applied behavior analysis. 3rd ed. London, United Kingdom: Pearson Education (2020).
51. Green G and Saunders RR. Stimulus equivalence. In: Lattal KA and Perone M, editors. Handbook of research methods in human operant behavior. Plenum (1998). p. 229–62.
52. Weinmann M, Schneider C, and vom Brocke J. Digital nudging. Business Inf Syst Eng (BISE). (2016) 58:433–6. doi: 10.1007/s12599-016-0453-1
53. Duane JN, Ericson J, and McHugh P. Digital nudges: a systematic narrative review and taxonomy. Behav Inf Technol. (2024), 1–21. doi: 10.1080/0144929X.2024.2440116
54. Schneider LD, Barakat A, Ali Z, Concepcion C, Taylor JA, and Jiang A. Pilot study of personalized sleep-coaching messages to promote healthy sleeping behaviors. Front Sleep. (2023) 1:1071822. doi: 10.3389/frsle.2022.1071822
55. Kadura S, Eisner L, Lopa SH, Poulakis A, Mesmer H, Willnow N, et al. Nudging towards Sleep-Friendly Health Care: A Multifaceted Approach on Reducing Unnecessary Overnight Interventions. Appl Clin Inf. (2024) 15:1025–39. doi: 10.1055/a-2404-2344
56. Zimmermann L and Sobolev M. Digital strategies for screen time reduction: A randomized field experiment. Cyberpsychology, Behavior and Social Networking. (2023) 26(1):42–9. doi: 10.1089/cyber.2022.0027
57. Caraban A, Karapanos E, Gonçalves D, and Campos P. (2019). 23 ways to nudge: A review of technology-mediated nudging in human-computer interaction, in: Proceedings of the 2019 CHI conference on human factors in computing systems, Glasgow, Scotland Uk. pp. 1–15. doi: 10.1145/3290605.3300733
58. Bergram K, Bezençon V, Maingot P, Gjerlufsen T, and Holzer A. (2020). “Digital Nudges for Privacy Awareness: From consent to informed consent?” In Proceedings of the 28th European Conference on Information Systems (ECIS), An Online AIS Conference. June 15-17, 2020. Available at: https://aisel.aisnet.org/ecis2020_rp/64.
59. Purohit AK, Barclay L, and Holzer A. (2020). Designing for digital detox: Making social media less addictive with digital nudges, in: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA. pp. 1–9. doi: 10.1145/3334480.3382810
60. Okeke F, Sobolev M, Dell N, and Estrin D. (2018). Good vibrations: can a digital nudge reduce digital overload?, in: Proceedings of the 20th International Conference on Human-computer Interaction with Mobile Devices and Services, Barcelona, Spain. pp. 1–12. doi: 10.1145/3229434.3229463
61. Wang Y, Leon PG, Acquisti A, Cranor LF, Forget A, and Sadeh N. (2014). A field trial of privacy nudges for facebook, in: Proceedings of the SIGCHI conference on Human Factors in Computing Systems, Toronto, Ontario, Canada. pp. 2367–76. doi: 10.1145/2556288.2557413
63. Hansen PG and Jespersen AM. Nudge and the manipulation of choice: A framework for the responsible use of the nudge approach to behaviour change in public policy. Eur J Risk Regul. (2013) 4:3–28. doi: 10.1017/S1867299X00002762
64. Purohit AK, Barev TJ, Schöbel S, Janson A, and Holzer A. (2023). Designing for digital wellbeing on a smartphone: co-creation of digital nudges to mitigate instagram overuse, in: Proceedings of the 56th Hawaii International Conference on System Sciences, . pp. 4087–96. Available at: https://hdl.handle.net/10125/103130.
66. Busch PA, Hausvik GI, Ropstad OK, and Pettersen D. Smartphone Usage Among Older Adults. Comput Hum Behav. (2021) 121.
67. Holovchak B. Smartphone and generations: The central role of smartphones in wartime–From the perspectives of Generation Z, Generation X, and the Baby Boomer generation in the context of the Russian-Ukrainian war. In: Flaherty-Echeverría S, Haring N, and Maierhofer S, editors. Conflict, challenge, and change: State–society–religion. Graz, Austria: Center for Inter-American Studies, University of Graz (2024). p. 101–21. doi: 10.25364/25.10:2024.0
68. Zitting KM, Münch MY, Cain SW, Wang W, Wong A, Ronda JM, et al. Young adults are more vulnerable to chronic sleep deficiency and recurrent circadian disruption than older adults. Sci Rep. (2018) 8:11052. doi: 10.1038/s41598-018-29358-x
69. Andrade C. The inconvenient truth about convenience and purposive samples. Indian J psychol Med. (2021) 43:86–8. doi: 10.1177/0253717620977000
70. Dumas-Mallet E, Button KS, Boraud T, Gonon F, and Munafò MR. Low statistical power in biomedical science: a review of three human research domains. R Soc Open Sci. (2017) 4:160254. doi: 10.1098/rsos.160254
71. Wu CH. An empirical study on the transformation of Likert-scale data to numerical scores. Appl Math Sci. (2007) 1:2851–62.
72. Buysse DJ, Reynolds CF, Monk TH, Berman SR, and Kupfer DJ. The Pittsburgh sleep quality index: A new instrument for psychiatric practice and research. Psychiatry Res. (1989) 28:193–213. doi: 10.1016/0165-1781(89)90047-4
73. Ward AF, Duke K, Gneezy A, and Bos MW. Brain drain: the mere presence of one’s own smartphone reduces available cognitive capacity. J Assoc Consumer Res. (2017) 2:140–54. doi: 10.1086/691462
74. Joshi A, Kale S, Chandel S, and Pal DK. Likert scale: Explored and explained. Br J Appl Sci Technol. (2015) 7:396. doi: 10.9734/BJAST/2015/14975
75. Bishara AJ and Hittner JB. Testing the significance of a correlation with nonnormal data: Comparison of Pearson, Spearman, transformation, and resampling approaches. Psychol Methods. (2012) 17:399–417. doi: 10.1037/a0028087
76. Knief U and Forstmeier W. Violating the normality assumption may be the lesser of two evils. Behav Res Methods. (2021) 53:2576–90. doi: 10.3758/s13428-021-01587-5
77. Skaik Y. The bread and butter of statistical analysis "t-test": Uses and misuses. Pakistan J Med Sci. (2015) 31:1558–9. doi: 10.12669/pjms.316.8984
78. Cohen J. Statistical power analysis for the behavioral sciences. Oxfordshire, United Kingdom: Routledge Academic (1988).
79. Sullivan GM and Feinn R. Using effect size-or why the p value is not enough. J Graduate Med Educ. (2012) 4:279–82. doi: 10.4300/JGME-D-12-00156.1
80. Maher JM, Markey JC, and Ebert-May D. The other half of the story: Effect size analysis in quantitative research. CBE Life Sci Educ. (2013) 12:345–51. doi: 10.1187/cbe.13-04-0082
81. Bowman ND. The importance of effect size reporting in Communication Research Reports. Communication Res Rep. (2017) 34:187–90. doi: 10.1080/08824096.2017.1353338
83. Hummel D and Maedche A. How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies. J Behav Exp Economics. (2019) 80:47–58. doi: 10.1016/j.socec.2019.03.005
84. Hayes SC and Brownstein AJ. Mentalism, behavior-behavior relations, and a behavior-analytic view of the purposes of science. Behav Analyst. (1986) 9:175–90. doi: 10.1007/BF03391944
85. Banerjee S and John P. Nudge plus: incorporating reflection into behavioral public policy. Behav Public Policy. (2024) 8:69–84. doi: 10.1017/bpp.2021.6
86. Martin G and Pear J. Behavior modification: What it is and how to do it. 6th ed. New Jersey, USA: Prentice-Hall (1999).
87. Olson JA, Sandra DA, Chmoulevitch D, Raz A, and Veissière SPL. A nudge-based intervention to reduce problematic smartphone use: Randomised controlled trial. Int J Ment Health Addict. (2023) 21:3842–64. doi: 10.1007/s11469-022-00826-w
88. Nayak BK. Understanding the relevance of sample size calculation. Indian J Ophthalmol. (2010) 58:469–70. doi: 10.4103/0301-4738.71673
89. Faber J and Fonseca LM. How sample size influences research outcomes. Dental press J orthodontics. (2014) 19:27–9. doi: 10.1590/2176-9451.19.4.027-029.ebo
90. Eysenbach G. Improving the quality of web surveys: the checklist for reporting results of internet E-surveys (CHERRIES) [Editorial]. J Med Internet Res. (2004) 6:e34. doi: 10.2196/jmir.6.3.e34
91. Yi H, Shin K, and Shin C. Development of the sleep quality scale. J sleep Res. (2006) 15:309–16. doi: 10.1111/j.1365-2869.2006.00544.x
92. Dzierzewski JM, Donovan EK, and Sabet SM. The Sleep Regularity Questionnaire: Development and initial validation. Sleep Med. (2021) 85:45–53. doi: 10.1016/j.sleep.2021.06.028
93. Sohn SY, Krasnoff L, Rees P, Kalk NJ, and Carter B. The association between smartphone addiction and sleep: a UK cross-sectional study of young adults. Front Psychiatry. (2021) 176:629407. doi: 10.3389/fpsyt.2021.629407
94. Sidman M. Tactics of scientific research: Evaluating experimental data in psychology. Basic Books. (1960).
95. Branch M. Lessons worth repeating: Sidman’s tactics of scientific research. J Exp Anal Behav. (2021) 115:44–55. doi: 10.1002/jeab.643
96. Zhang Y, Fan S, Hui H, Zhang N, Li J, Liao L, et al. Privacy protection for open sharing of psychiatric and behavioral research data: Ethical considerations and recommendations. Alpha Psychiatry. (2025) 26(1):38759. doi: 10.31083/ap38759
97. Polman E and Maglio SJ. Will your nudge have a lasting impact (2024). Available online at: https://hbr.org/2024/04/will-your-nudge-have-a-lasting-impact (Accessed on February 2, 2025).
98. Kekic M, McClelland J, Bartholdy S, Chamali R, Campbell IC, and Schmidt U. Bad things come to those who do not wait: Temporal discounting is associated with compulsive overeating, eating disorder psychopathology and food addiction. Front Psychiatry. (2020) 10:978. doi: 10.3389/fpsyt.2019.00978
99. Herzog SM and Hertwig R. Boosting: empowering citizens with behavioral science. Annu Rev Psychol. (2025) 76:851–81. doi: 10.1146/annurev-psych-020924-124753
100. Lembcke TB, Engelbrecht N, Brendel AB, and Kolbe L. (2019). To nudge or not to nudge: Ethical considerations of digital nudging based on its behavioral economics roots, in: Proceedings of the 27th European conference on Information Systems, Stockholm-Uppsala, Sweden. pp. 1–17. New Jersey, USA. Available at: https://www.researchgate.net/publication/333421600_To_Nudge_or_Not_To_Nudge_Ethical_Considerations_of_Digital_Nudging_Based_on_Its_Behavioral_Economics_Roots.
101. Clavien C. Ethics of nudges: A general framework with a focus on shared preference justifications. J Moral Educ. (2018) 47:366–82.
Keywords: screen time, digital addiction, digital nudge, sleep quality, smartphone
Citation: Vu TH and Tagliabue M (2025) Active nudging towards digital well-being: reducing excessive screen time on mobile phones and potential improvement for sleep quality. Front. Psychiatry 16:1602997. doi: 10.3389/fpsyt.2025.1602997
Received: 30 March 2025; Accepted: 20 June 2025;
Published: 17 July 2025.
Edited by:
Yibo Wu, Peking University, China
Reviewed by:
Manuel Martí-Vilar, University of Valencia, Spain
Iker Sáez, University of Deusto, Spain
Sandra Figueiredo, Autonomous University of Lisbon, Portugal
Mira Fauth-Bühler, FOM University of Applied Sciences for Economics and Management, Germany
Thalles Guilarducci Costa, State University of Goiás, Brazil
Copyright © 2025 Vu and Tagliabue. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Marco Tagliabue, marco.tagliabue@oslomet.no