Abstract
Background:
Artificial Intelligence (AI) is rapidly reshaping pediatric care. While AI-driven clinical screening tools have demonstrated significant value in the early identification of neurodevelopmental risks (e.g., dyslexia, autism), a parallel trend of continuous, consumer-grade quantification of neurotypical children is emerging.
Problem:
This paper critically evaluates the “Quantified Child” paradigm—defined as the use of consumer technologies for continuous physiological and behavioral tracking. We argue that, unlike targeted clinical interventions, this pervasive surveillance approach carries systemic risks: it may induce “technoference” in parent-child interactions, amplify caregiver performance anxiety, and introduce iatrogenic risk through false-positive labeling.
Proposal:
Drawing on developmental science, we propose shifting to a “Supporting the Caregiver” paradigm for non-clinical settings. In this model, AI functions as an administrative assistant to automate family logistics and reduce cognitive load, rather than a digital intermediary for monitoring the child.
Mechanism:
We posit that reduced caregiver stress and effective time savings serve as crucial proximal mediators. By improving the caregiver's psychological well-being, AI indirectly protects the quality of responsive parenting, which is the definitive driver of positive child developmental outcomes in early childhood.
Conclusion:
A paradigm shift from direct child quantification to caregiver support offers a more robust and ethical technological pathway, ensuring that AI serves to enrich, rather than displace, the human connections essential for early development.
1 Introduction
Artificial Intelligence (AI) technology is rapidly penetrating the ecosystem of child development. From intelligent baby monitors to developmental tracking apps, digital tools are continuously reshaping parenting practices. Crucially, however, we must distinguish between two fundamentally different applications of AI in this domain.
On one hand, AI-driven clinical screening tools have demonstrated immense value in the early identification of neurodevelopmental risks (e.g., dyslexia, autism) within professional settings (Achenie et al., 2019; Svaricek et al., 2025). These tools are beneficial because they are targeted, episodic, and evidence-based. However, a concerning parallel trend is the generalization of this medical logic into the daily lives of neurotypical children through consumer-grade technologies. We term this emerging paradigm the “Quantified Child,” which aims to achieve fine-grained tracking and optimization of developmental trajectories by continuously collecting data on children's physiological and behavioral performance.
A profound tension exists between this technological path of “continuous quantification” and the core principles of developmental science, particularly in early childhood (0–6 years). Long-term evidence from developmental science repeatedly emphasizes that high-quality, stable parent-child interactions and a supportive family environment are the cornerstones of children's social-emotional competence and overall well-being (Balaj et al., 2021; Egeland et al., 1990). This evidence suggests that the primary driver of development is the quality of the relationship, not the quantity of data. Consequently, any technology that inserts itself as a “digital intermediary” between parent and child—potentially causing “technoference” or performance anxiety—must be critically examined.
Existing research has consistently confirmed that caregivers' psychological states and family adversity are significant risk factors, but it has also demonstrated that the quality of family interaction is malleable and responsive to intervention (Adjei et al., 2024). This fact provides a pivotal perspective for rethinking technology: the value of AI may not lie in directly “optimizing” the child, but in indirectly “supporting” the caregiver.
Therefore, the objectives of this paper are twofold. First, based on an integrated evidence base, we conduct a structured risk assessment of the “Quantified Child” paradigm in the consumer sector, arguing that its inherent risks—such as the disruption of responsive parenting—outweigh its benefits for neurotypical children. Second, we propose an alternative paradigm centered on “Supporting the Caregiver.” We argue that shifting the technological focus from monitoring children to reducing caregiver stress and logistical burden represents a more ethical and effective path, aligning technological capabilities with the biological realities of early child development.
2 Systemic risks of the “quantified child” paradigm
AI applications that directly source data from children for continuous quantitative assessment inherently produce a series of foreseeable, systemic negative effects. These effects are not “side effects” that can be eliminated through technological optimization, but innate flaws of the paradigm, reflected mainly in the following four aspects.
2.1 Increased caregiver psychological stress
By continuously comparing a child's individual data with standardized norms, the feedback mechanism of quantitative AI inevitably intensifies the performance-based pressure on caregivers in a sociocultural context of high-investment parenting. Research has clearly indicated that parenting stress is a key variable affecting the quality of family relationships and children's behavioral and emotional outcomes, and that psychological support for parents can effectively reduce their stress levels, demonstrating that this pathway is amenable to intervention (Egeland et al., 1990; Mo et al., 2024). When a technological system constantly shapes parents' attention and parenting goals with alerts of “deviation from the norm,” parents are highly likely to invest their limited time and energy in the short-term “optimization” of data metrics. This data-driven anxiety, operating through emotional exhaustion and controlling interactions, ultimately reduces the quality of parent-child interaction and thus exerts an indirect but continuous adverse effect on the child's emotional security.
2.2 Narrowing of key developmental areas
The effectiveness of algorithms is naturally limited to variables that are easy to operationalize and digitize, such as a child's vocabulary size, sleep duration, or screen time. However, classic theories of developmental science, such as ecological systems theory, repeatedly emphasize the decisive role of multi-level systems and relational processes in development. These factors include the quality of parent-child interaction, family cohesion, and community support, all of which are inherently difficult to quantify precisely (Brooks-Gunn et al., 2000). When parents' and society's resources and attention are excessively absorbed by a few measurable indicators, time for high-quality interaction, joint reading, and outdoor activities may be crowded out. This phenomenon can be termed “measurability bias”: it raises parenting opportunity costs and severely narrows the developmental focus.
2.3 Introduction of labeling and iatrogenic risks
In complex real-world scenarios, algorithmic false positives and uncertainty are technologically inherent and difficult to eradicate. A wrong label, even if it is just a probabilistic risk alert, can have a profound and negative “imprinting effect” on a child's self-concept and a parent's nurturing decisions. Evidence from the field of pediatric screening clearly indicates that false-positive results can significantly increase parental anxiety and trigger a chain of unnecessary medical consultations and over-interventions, which constitutes a typical iatrogenic risk (O'Leary et al., 2024). When a probabilistic risk alert is misinterpreted by a non-professional end-user (the parent) as a definitive diagnostic label, it can fundamentally change their interaction patterns with the child and the allocation of family resources, ultimately amplifying avoidable medical burdens and psychological stress. For example, the American Academy of Pediatrics (AAP) explicitly stated in its 2022 policy statement that it does not recommend consumer-grade cardiorespiratory monitors as a means to reduce the risk of Sudden Infant Death Syndrome (SIDS), with the core logic being to prevent the misleading sense of security they provide from replacing truly effective evidence-based guidance and environmental modifications (Moon et al., 2022).
2.4 Disruption of responsive parenting: the “technoference” effect
Perhaps the most profound risk is the interference with responsive parenting—the cornerstone of early childhood development. This mechanism relies on the caregiver's sensitive, timely, and contingent feedback to the child's signals (Olson et al., 1990). However, the “Quantified Child” paradigm inserts a digital interface between parent and child. Recent empirical studies on “technoference” (technology-based interference) suggest that frequent checking of devices disrupts the natural “serve and return” interaction patterns essential for brain development (McDaniel and Radesky, 2018a,b). When caregivers rely on a “digital intermediary” to interpret their child's needs (e.g., checking an app to see if the baby is hungry rather than reading the baby's cues), it risks “de-skilling” their intuitive parenting abilities. The caregiver's attention is fractured, shifting from the child's face to the data screen, thereby degrading the quality of the immediate, embodied connection that is critical for secure attachment.
3 The alternative paradigm: AI as a caregiver support system
Based on the systematic deconstruction of the risks associated with continuous quantification—particularly the risks of technoference and iatrogenic anxiety—this paper argues that the application of consumer-grade AI in early childhood must undergo a fundamental shift. We propose an alternative paradigm: moving from “Quantifying the Child” to “Supporting the Caregiver.”
The theoretical foundation of this paradigm shift lies in the long-standing consensus of developmental science: high-quality parent-child interaction and a supportive family environment are the strongest predictors of a child's healthy development. As emphasized by ecological systems theory, the synergistic effects of multi-level systems—including parents, schools, and communities—jointly shape an individual's developmental trajectory, rather than isolated biological metrics (Brooks-Gunn et al., 2000).
Crucially, this paradigm does not ignore the child; rather, it prioritizes the indirect pathway to child well-being. Solid intervention evidence supports this logic: meta-analyses show that interventions focused on parents (such as parent training) can significantly reduce parenting stress (standardized mean difference, SMD = −0.38, 95% CI: −0.49 to −0.27) and improve the quality of parent-child interaction (Shah et al., 2022). Furthermore, technology-assisted interventions, such as video feedback, have been shown to effectively enhance caregiver sensitivity (SMD = 0.34, 95% CI: 0.20–0.49), translating directly into better attachment security for the child (O'Hara et al., 2019).
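To make effect sizes of this kind concrete, the sketch below computes a standardized mean difference (Hedges' g) with a normal-approximation 95% confidence interval. The scores and group sizes are synthetic and chosen purely for illustration; they are not data from the meta-analyses cited above.

```python
import numpy as np

def hedges_g(treatment, control):
    """Standardized mean difference (Hedges' g) with an approximate 95% CI."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    nt, nc = len(t), len(c)
    # Pooled standard deviation across the two arms
    sp = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                 / (nt + nc - 2))
    d = (t.mean() - c.mean()) / sp
    j = 1 - 3 / (4 * (nt + nc) - 9)   # small-sample bias correction
    g = j * d
    # Standard normal-approximation standard error for g
    se = np.sqrt((nt + nc) / (nt * nc) + g**2 / (2 * (nt + nc)))
    return g, (g - 1.96 * se, g + 1.96 * se)

rng = np.random.default_rng(0)
# Hypothetical parenting-stress scores: lower is better in the intervention arm
intervention = rng.normal(48, 10, 200)
control = rng.normal(52, 10, 200)
g, (lo, hi) = hedges_g(intervention, control)
print(f"SMD = {g:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A negative SMD here, as in the parenting-stress meta-analysis, indicates lower stress in the intervention arm relative to controls, expressed in pooled standard-deviation units.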
Therefore, the primary value of AI should not be to monitor the child as a passive subject, but to empower the adult as an active agent. By automating administrative burdens and providing just-in-time, stress-reducing support, AI can protect the “relational space” needed for responsive parenting.
From an ethical perspective, shifting the point of technological intervention from the vulnerable child to the autonomous adult (the caregiver) adheres more strictly to the “do not harm” principle. This approach avoids the risks of labeling and medicalizing normal childhood variations while maximizing the benefits of evidence-based environmental optimization. Thus, the “Support the Caregiver” paradigm is not merely a risk-mitigation strategy, but a scientifically grounded pathway to optimize child outcomes by nurturing the nurturer.
4 Operational definition and boundaries of the alternative paradigm
The core principle of this alternative paradigm is that AI should be designed to support the human caregiving system rather than to replace human judgment with algorithmic data. To ensure ethical application, it is essential to establish strict operational boundaries based on the setting of use, distinguishing clearly between consumer-oriented home environments and professional clinical settings.
4.1 Application for parents: an administrative and logistical support system
For parents of neurotypical children, AI should function as a “Family Affairs Assistant” aimed at reducing cognitive load and time costs, rather than a “Developmental Monitor.” The primary objective is to liberate parents from tedious transactional work—such as family meal planning, schedule coordination, and the personalized retrieval of reliable parenting knowledge—allowing them to reinvest their time in high-quality, face-to-face interactions. By automating these logistical tasks, technology serves to minimize “technoference” and protect the relational space between parent and child.
Conversely, this paradigm strictly prohibits the collection, analysis, or scoring of a child's physiological or behavioral data in a consumer context. The system should not include functions for continuous child development tracking or peer ranking, which often serve to amplify parental anxiety. Furthermore, in alignment with the evidence-based recommendations of the American Academy of Pediatrics (AAP), this paradigm explicitly opposes the promotion of consumer-grade wearable devices for routine vital sign monitoring in healthy infants, as these tools often provide a misleading sense of security and trigger unnecessary iatrogenic alarm (Moon et al., 2022).
4.2 Application for clinicians: a deeply integrated decision support system
In contrast to the consumer domain, the “Support the Caregiver” paradigm within professional settings extends to supporting the professional caregiver—the pediatrician or therapist. In this specific context, quantification and automated screening are both permissible and highly valuable, provided they are designed to enhance rather than replace clinical expertise.
First, AI can serve as a powerful tool for pre-visit screening and information integration. By analyzing structured data—such as standardized developmental questionnaires or targeted screening tools for conditions like dyslexia (e.g., EarlyBird)—AI can efficiently identify high-risk markers before a consultation. This application leverages the computational strengths of AI to ensure early identification, directly addressing the critical need for timely intervention in potential neurodevelopmental disorders.
Second, AI should be deeply integrated into the clinical workflow to function as an efficiency partner. By automating administrative tasks—such as extracting key information from Electronic Health Records (EHR), transcribing consultations in real-time, and drafting standardized medical documentation—AI allows clinicians to redirect their attention from the computer screen back to the patient. However, it is imperative that the physician remains the “human-in-the-loop.” All AI-generated insights must be presented strictly as decision support, ensuring that the final interpretation of a child's developmental status relies on professional human judgment rather than algorithmic output alone.
5 Validation standards for the alternative paradigm's effectiveness
The evaluation of AI tools under the “Support the Caregiver” paradigm requires a fundamental shift in validation methodology. We acknowledge that the ultimate “gold standard” for any pediatric intervention remains the improvement of child developmental outcomes. However, obtaining these outcomes through continuous digital surveillance introduces the very risks of labeling and technoference we seek to avoid. Therefore, we propose a hierarchical validation framework that distinguishes between proximal mediators (caregiver metrics) and distal outcomes (child metrics), verifying the efficacy of AI through an indirect but causal pathway.
The primary level of validation should focus on proximal mediators: specifically, “effective time saved” and “reduction in caregiver physiological and psychological stress.” Unlike the “Quantified Child” paradigm, which treats these metrics as secondary user-experience factors, this new paradigm posits them as critical clinical mechanisms. The logic is that by offloading cognitive drudgery and logistical burdens, AI preserves the caregiver's emotional capacity. Effectiveness should be measured by the reallocation of this saved time: validation must confirm that the time “freed” by AI is not displaced by screen time, but is reinvested in high-quality, face-to-face parent-child interactions.
The secondary, and more critical, level of validation is the demonstrable improvement in distal child development outcomes. We argue that while the AI tool itself should not monitor the child, the scientific validation of the tool must verify its downstream benefits. Researchers should employ Randomized Controlled Trials (RCTs) using a mediation analysis model to prove that the reduction in parental stress (facilitated by AI) statistically mediates improvements in the child's secure attachment and social-emotional competence. For instance, standardized clinical assessments (administered by professionals, not apps) should demonstrate that children in families using “Supportive AI” exhibit better developmental trajectories compared to controls, not because the machine optimized the child, but because the machine successfully nurtured the family environment.
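As an illustration of the proposed analytic strategy, the following sketch estimates an indirect (mediated) effect on simulated trial data using the classic product-of-coefficients approach. All variable names (`ai_support`, `stress`, `child_outcome`) and effect magnitudes are hypothetical; a real study would use validated instruments and bootstrapped confidence intervals rather than this minimal least-squares version.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Randomized assignment: 0 = control, 1 = "Supportive AI" arm
ai_support = rng.integers(0, 2, n).astype(float)
# Path a: AI support lowers caregiver stress (simulated effect of -5 points)
stress = 50.0 - 5.0 * ai_support + rng.normal(0, 8, n)
# Path b: lower stress predicts a better child outcome; no direct effect simulated
child_outcome = 100.0 - 0.6 * stress + rng.normal(0, 8, n)

def ols(y, X):
    """Least-squares coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

a = ols(stress, ai_support)[1]                    # treatment -> mediator
coefs = ols(child_outcome, np.column_stack([stress, ai_support]))
b, c_prime = coefs[1], coefs[2]                   # mediator -> outcome; direct effect
indirect = a * b                                  # mediated (indirect) effect

print(f"a={a:.2f}  b={b:.2f}  indirect (a*b)={indirect:.2f}  direct={c_prime:.2f}")
```

In this simulation the treatment improves the child outcome almost entirely through reduced stress (positive a*b, near-zero direct effect), which is exactly the mediation pattern the proposed RCTs would need to demonstrate.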
6 Discussion
The central argument of this paper is that the application of AI in early childhood development faces a fundamental paradigm choice. This is not merely a technical optimization problem, but a question of values: should technology serve to “Quantify the Child” or to “Support the Caregiver”?
6.1 The fallacy of generalized quantification
The “Quantified Child” paradigm represents an extension of the engineering mindset—pursuing efficiency and certainty—into the organic realm of parenting. While this reductionist approach promises objective control, it fundamentally conflicts with the holistic and relational nature of early childhood. Crucially, our critique focuses on the generalization of this paradigm into the consumer market for neurotypical children. A common counterargument, often raised by proponents of precision medicine, is that early quantitative screening is essential for identifying risks in neurodevelopmental disorders. We fully agree with this premise. However, the value of professional, targeted screening tools (e.g., for dyslexia or autism) does not justify the continuous, invasive surveillance of healthy children in their homes. The danger lies in the “medicalization of everyday life,” where the logic of the clinic—constant monitoring for defects—displaces the logic of the home, which should be centered on unconditional acceptance and connection.
6.2 The mediation mechanism: supporting the parent to save the child
In contrast, the “Supporting the Caregiver” paradigm represents a philosophy of prudent technological application. It acknowledges that for the first 6 years of life, the most sophisticated “developmental support system” is not an algorithm, but a responsive human adult. Therefore, the mechanism of this paradigm is indirect but potent: by acting as a “buffer” against administrative stress and logistical chaos, AI protects the caregiver's psychological resources. This aligns with the “Mediation Model” of family intervention: AI reduces parental stress, which in turn reduces negative “technoference” behaviors, ultimately preserving the quality of parent-child interaction that drives child outcomes.
6.3 Implementation challenges and governance
Implementing this paradigm faces significant challenges, primarily from business models incentivized by user engagement and data collection. To correct this market failure, we need a multi-level governance framework ensuring the Safety, Equity, Effectiveness, and Trustworthiness (SEET principles) of technology deployment (Rozenblit et al., 2025). Key governance priorities must include: (1) Strict Data Minimization: Prohibiting the collection of unnecessary child data in non-clinical apps; (2) Human-in-the-Loop: Ensuring that all AI decisions involving child welfare maintain human professionals as the final authority; and (3) Post-Market Surveillance: Establishing mechanisms to address model drift and bias over time (Husain et al., 2024). Furthermore, voluntary certification frameworks could incentivize the development of “privacy-enhancing” applications, similar to standards emerging in wearable body-fluid analysis (Brasier et al., 2024), but adapted for the digital well-being of the family.
6.4 Future research directions
Future research should focus on empirically testing this alternative paradigm. We call for Randomized Controlled Trials (RCTs) that compare “Quantified Child” apps against “Caregiver Support” tools. Crucially, these studies must measure the “Technoference Effect”: tracking not just whether the app works, but whether using the app physically and attentionally displaces parent-child interaction. Only by including these ecological metrics can we truly assess the net impact of AI on family well-being.
7 Conclusion
This paper has conducted a systematic risk assessment of the mainstream application of artificial intelligence in early childhood development. Our analysis reveals that the “Quantified Child” paradigm—when applied to consumer-grade surveillance of neurotypical children—harbors systemic flaws. It risks increasing caregiver performance anxiety, narrowing developmental focus, and inducing “technoference” that disrupts core parent-child interaction mechanisms.
Accordingly, we conclude that a responsible path for AI requires a fundamental paradigm shift. The industry must pivot from developing tools that “watch” the child to developing tools that “serve” the caregiver. By strictly defining AI as an auxiliary administrative support system, we can harness its computational power to reduce the burdens of modern parenting without sacrificing the human connection. Ultimately, the goal of AI in early childhood should not be to optimize data points, but to protect the unquantifiable, interpersonal moments that constitute the foundation of human development.
Statements
Author contributions
YX: Methodology, Conceptualization, Project administration, Data curation, Writing – review & editing, Investigation, Writing – original draft, Software. HC: Writing – original draft, Supervision, Software, Writing – review & editing, Validation, Methodology, Project administration. M-MW: Investigation, Writing – review & editing, Project administration, Writing – original draft, Data curation, Methodology. YZ: Project administration, Writing – review & editing, Data curation, Writing – original draft, Methodology, Resources. H-TC: Writing – original draft, Formal analysis, Methodology, Writing – review & editing, Data curation. FL: Project administration, Supervision, Validation, Methodology, Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This work was supported by the National Natural Science Foundation of China Youth Fund (82205190), the China Postdoctoral Science Foundation General Project (2023M731027), a Special Grant from the China Postdoctoral Science Foundation (2024T170253), the Henan Province Postdoctoral Project (HN2022096), the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJQN202315133), special funding for the postdoctoral research project of the Chongqing Municipal Human Resources and Social Security Bureau (2022CQBSHTB1029), and the Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0505).
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Achenie, L. E. K., Scarpa, A., Factor, R. S., Wang, T., Robins, D. L., McCrickard, D. S., et al. (2019). A machine learning strategy for autism screening in toddlers. J. Dev. Behav. Pediatr. 40, 369–376. doi: 10.1097/DBP.0000000000000668
Adjei, N. K., Jonsson, K. R., Straatmann, V. S., Melis, G., McGovern, R., Kaner, E., et al. (2024). Impact of poverty and adversity on perceived family support in adolescence: findings from the UK millennium cohort study. Eur. Child Adolesc. Psychiat. 33, 3123–3132. doi: 10.1007/s00787-024-02389-8
Balaj, M., York, H. W., Sripada, K., Besnier, E., Vonen, H. D., Aravkin, A., et al. (2021). Parental education and inequalities in child mortality: a global systematic review and meta-analysis. Lancet 398, 608–620. doi: 10.1016/S0140-6736(21)00534-1
Brasier, N., Wang, J., Gao, W., Sempionatto, J. R., Dincer, C., Ates, H. C., et al. (2024). Applied body-fluid analysis by wearable devices. Nature 636, 57–68. doi: 10.1038/s41586-024-08249-4
Brooks-Gunn, J., Berlin, L. J., Leventhal, T., and Fuligni, A. S. (2000). Depending on the kindness of strangers: current national data initiatives and developmental research. Child Dev. 71, 257–268. doi: 10.1111/1467-8624.00141
Egeland, B., Kalkoske, M., Gottesman, N., and Erickson, M. F. (1990). Preschool behavior problems: stability and factors accounting for change. J. Child Psychol. Psychiat. 31, 891–909. doi: 10.1111/j.1469-7610.1990.tb00832.x
Husain, A., Knake, L., Sullivan, B., Barry, J., Beam, K., Holmes, E., et al. (2024). AI models in clinical neonatology: a review of modeling approaches and a consensus proposal for standardized reporting of model performance. Pediatr. Res. 98, 412–422. doi: 10.1038/s41390-024-03774-4
McDaniel, B. T., and Radesky, J. S. (2018a). Technoference: longitudinal associations between parent technology use, parenting stress, and child behavior problems. Pediatr. Res. 84, 210–218. doi: 10.1038/s41390-018-0052-6
McDaniel, B. T., and Radesky, J. S. (2018b). Technoference: parent distraction with technology and associations with child behavior problems. Child Dev. 89, 100–109. doi: 10.1111/cdev.12822
Mo, S., Bu, F., Bao, S., and Yu, Z. (2024). Comparison of effects of interventions to promote the mental health of parents of children with autism: a systematic review and network meta-analysis. Clin. Psychol. Rev. 114:102508. doi: 10.1016/j.cpr.2024.102508
Moon, R. Y., Carlin, R. F., Hand, I., Task Force on Sudden Infant Death Syndrome, and the Committee on Fetus and Newborn (2022). Sleep-related infant deaths: updated 2022 recommendations for reducing infant deaths in the sleep environment. Pediatrics 150:e2022057990. doi: 10.1542/peds.2022-057990
O'Hara, L., Smith, E. R., Barlow, J., Livingstone, N., Herath, N. I., Wei, Y., et al. (2019). Video feedback for parental sensitivity and attachment security in children under five years. Cochrane Database Syst. Rev. 11:CD012348. doi: 10.1002/14651858.CD012348.pub2
O'Leary, A., Lahey, T., Lovato, J., Loftness, B., Douglas, A., Skelton, J., et al. (2024). Using wearable digital devices to screen children for mental health conditions: ethical promises and challenges. Sensors 24:3214. doi: 10.3390/s24103214
Olson, S. L., Bates, J. E., and Bayles, K. (1990). Early antecedents of childhood impulsivity: the role of parent-child interaction, cognitive competence, and temperament. J. Abnorm. Child Psychol. 18, 317–334. doi: 10.1007/BF00916568
Rozenblit, L., Price, A., Solomonides, A., Joseph, A. L., Koski, E., Srivastava, G., et al. (2025). Toward responsible AI governance: balancing multi-stakeholder perspectives on AI in healthcare. Int. J. Med. Inform. 203:106015. doi: 10.1016/j.ijmedinf.2025.106015
Shah, R., Camarena, A., Park, C., Martin, A., Clark, M., Atkins, M., et al. (2022). Healthcare-based interventions to improve parenting outcomes in LMICs: a systematic review and meta-analysis. Matern. Child Health J. 26, 1217–1230. doi: 10.1007/s10995-022-03445-y
Svaricek, R., Dostalova, N., Sedmidubsky, J., and Cernek, A. (2025). INSIGHT: combining fixation visualisations and residual neural networks for dyslexia classification from eye-tracking data. Dyslexia 31:e1801. doi: 10.1002/dys.1801
Keywords
artificial intelligence, caregiver support, child development, paradigm evaluation, quantified self, technology ethics
Citation
Xu Y, Chen H, Wei M-M, Zhang Y, Cui H-T and Liu F (2026) From “quantifying the child” to “supporting the caregiver”: a paradigm evaluation and ethical pathway selection for AI applications in child development. Front. Psychol. 17:1682555. doi: 10.3389/fpsyg.2026.1682555
Received
13 August 2025
Revised
09 January 2026
Accepted
26 January 2026
Published
17 February 2026
Volume
17 - 2026
Edited by
Hyunju Kim, Northeastern University, United States
Reviewed by
Karlis Kanders, Nesta, United Kingdom
Copyright
© 2026 Xu, Chen, Wei, Zhang, Cui and Liu.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Hong-Tao Cui, dr.cuihongtao@qq.com; Fang Liu, lfxy_89@qq.com
†These authors have contributed equally to this work and share first authorship