<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Artificial Intelligence | Technology and Law section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/artificial-intelligence/sections/technology-and-law</link>
        <description>RSS Feed for Technology and Law section in the Frontiers in Artificial Intelligence journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Wed, 13 May 2026 16:21:01 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1716108</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1716108</link>
        <title><![CDATA[Between innovation and risk: artificial intelligence and data protection in digital Mexico]]></title>
        <pubDate>Wed, 13 May 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Emilio J. Medrano-Sánchez</dc:creator><dc:creator>Elizabeth Ruiz-Ramírez</dc:creator><dc:creator>Mariela L. Ayllon</dc:creator>
        <description><![CDATA[At a global level, the rapid adoption of artificial intelligence (AI) brings significant risks to data privacy, prompting the development of legal frameworks. In Latin America, including Mexico, where such frameworks remain emergent, the issue gains particular relevance. This study analyzed how perceptions of AI are associated with perceptions of personal data protection among university-educated professionals residing in Mexico City in 2025, based on privacy theory, the Technology Acceptance Model (TAM), and their extensions. A quantitative, non-experimental, correlational, and cross-sectional approach was employed. Data were collected from 101 university-educated professional participants residing in Mexico City using an expert-validated, 24-item Likert-type questionnaire. Data analysis was conducted in SPSS using non-parametric correlation tests (Spearman and Kendall). Results indicated that a more favorable perception of AI was associated with more favorable perceptions of personal data protection and its dimensions. The correlations demonstrated a moderate and significant positive association (p < 0.001) between perceptions of AI and overall perceptions of data protection. Furthermore, significant positive correlations were found across each evaluated dimension, thus confirming the four specific hypotheses. From a theoretical perspective, the findings suggest that contextual factors modulate the AI-privacy relationship, contributing a Latin American perspective to the literature. On a practical and social level, recommendations aligned with SDG 16 include strengthening institutional frameworks (regulation, transparency, digital education), fostering public-private collaboration, and promoting digital oversight and literacy to achieve informed trust.]]></description>
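        <!--
          A minimal sketch of the non-parametric correlation tests this abstract
          describes, using scipy rather than SPSS; the variable names and values
          below are illustrative assumptions, not study data.

          from scipy.stats import spearmanr, kendalltau

          ai_perception = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]    # per-respondent Likert scores for AI perception
          data_protection = [3, 5, 3, 4, 2, 4, 4, 2, 5, 4]  # matching data-protection perception scores

          rho, p_rho = spearmanr(ai_perception, data_protection)   # Spearman's rho
          tau, p_tau = kendalltau(ai_perception, data_protection)  # Kendall's tau
          print(f"rho={rho:.3f} (p={p_rho:.4f}), tau={tau:.3f} (p={p_tau:.4f})")
        -->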
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1762748</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1762748</link>
        <title><![CDATA[AI spring and its regulation discourse: a bibliometric study of trends in literature]]></title>
        <pubDate>Thu, 07 May 2026 00:00:00 +0000</pubDate>
        <category>Systematic Review</category>
        <dc:creator>Katja Debelak</dc:creator><dc:creator>Primož Pevcin</dc:creator><dc:creator>Rok Hržica</dc:creator>
        <description><![CDATA[Introduction: This study was prompted by the rapid acceleration of AI capabilities in the transformer era since 2018 and the concurrent regulatory shift that elevated legal accountability and public governance to central policy and research priorities. It contributes by treating 2018–2025 as a distinct governance regime in which transformer-enabled capability scaling and foundation models shifted AI governance debates toward enforceable accountability architectures. Methods: The study maps the regulation–accountability–public governance nexus as an operational problem: which accountability forums dominate, which regulatory instruments anchor in the field, and which public-administration mechanisms remain underdeveloped. Using multiple queries in the Web of Science Core Collection, validated with Scopus, and analyzed with Bibliometrix and complementary science-mapping techniques, the study examines publication trends, influential contributors and outlets, collaboration networks, and citation and thematic structures. Results: Publication output increases sharply after 2022, aligning with major regulatory milestones such as the EU AI Act. Results show a strong European concentration, with European actors serving as central hubs in collaboration networks, and indicate that 2018–2021 publications form a foundational intellectual core. The field is anchored in legally oriented concepts (law, transparency, governance, accountability, data protection), while themes such as legitimacy, institutional logics, and rights operationalization remain underdeveloped. Discussion: Despite growing interdisciplinarity, thematic fragmentation persists, highlighting the need for stronger integration across legal scholarship, public administration, and technical AI research, and providing a focused basis for future research and policy agendas.]]></description>
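        <!--
          A rough sketch of the publication-trend step described above, assuming
          a Web of Science export with a "PY" (publication year) column; the
          study itself used Bibliometrix in R, so this pandas version is only an
          illustration.

          import pandas as pd

          records = pd.read_csv("wos_export.csv")        # hypothetical WoS export file
          per_year = records["PY"].value_counts().sort_index()
          print(per_year.loc[2018:2025])                 # output in the 2018 to 2025 window
        -->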
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1719225</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1719225</link>
        <title><![CDATA[Beyond mediation: an evolutionary benchmark for emotionally and normatively competent AI]]></title>
        <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Kohei Oshio</dc:creator>
        <description><![CDATA[This study presents an agent-based simulation framework for assessing the adaptive and emotional abilities required of AI systems acting as mediators in social and legal environments, where being right does not necessarily mean achieving fairness and stability. Unlike benchmarks based on logical inference or computational efficiency, the framework measures three key traits of effective dispute resolution: emotional regulation, willingness to compromise, and post-conflict trust repair. We defined five canonical negotiation tasks that vary in psychological reactivity, escalation, fatigue in compromise, and interaction-style asymmetry (S1, H1, H2, H3, C1), comparing five mediator policies under identical stochastic conditions using independent Monte Carlo episodes per condition with pre-specified endpoints: agreement probability, conditional time to agreement, inequality of outcomes, robustness across tasks, and a composite score with pre-specified weights that makes efficiency–equity–stability trade-offs explicit. An evolutionary analysis examined long-run policy selection. The results revealed stable trade-offs across scenarios: strong control shortens negotiations yet systematically worsens inequality and does not reliably increase agreement, whereas transparent, weak interventions more consistently balance efficiency and fairness across heterogeneous conditions; slight, reversible framing improves performance primarily in high-reactivity settings. Evolutionary selection favors policies that preserve cross-task robustness rather than optimizing a single metric. The composite objective is intended as a configurable governance parameter rather than a universal social welfare function. Since the baseline calibration intentionally conserves emotional amplitude, we present the current release as an emotion-light v1 benchmark and outline specific v2 enhancements, including event-driven shocks and empirically calibrated reactivity ranges that make emotion and repair more distinctive. Overall, the benchmark redefines AI evaluation for mediation as process control within the context of emotion and norms, and provides a reproducible protocol for assessing both human and AI mediators.]]></description>
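        <!--
          A minimal sketch of the pre-specified-weight composite score the
          benchmark reports; the weights, metric names, and episode values are
          illustrative assumptions, not the paper's calibration.

          WEIGHTS = {"agreement": 0.40, "speed": 0.20, "equity": 0.25, "stability": 0.15}

          def composite_score(metrics):
              # weighted sum over endpoint metrics normalized to [0, 1]
              return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

          episode = {"agreement": 0.82, "speed": 0.64, "equity": 0.71, "stability": 0.77}
          print(round(composite_score(episode), 3))      # one Monte Carlo episode's score
        -->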
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1802986</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1802986</link>
        <title><![CDATA[Judicial review of algorithmic administrative systems: legality, evidence and remedies in the smart city state]]></title>
        <pubDate>Mon, 13 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Khalid Saleem Ameen</dc:creator><dc:creator>Tavga Abbas Towfiq</dc:creator><dc:creator>Kawar Mohammed Mousa</dc:creator><dc:creator>Mahrokh Pooyanmehr</dc:creator>
        <description><![CDATA[Smart city programs are increasingly used by governments to manage public services through digital and automated systems, and this development is closely linked with questions of administrative power, fairness, and accountability. Existing public law doctrines in common law systems were shaped for human decision-making and have not been fully adapted to administrative action carried out through algorithmic systems, which creates a clear gap in current legal thinking. The research is motivated by the growing use of algorithmic systems as effective decision-makers in areas such as utilities, mobility, welfare, and digital identity, especially in contexts marked by limited state capacity and prolonged emergency conditions. The study aims to examine how judicial review should respond to algorithmic administrative systems so that legality, fairness, and accountability remain protected while legitimate administrative goals are still met. Methodologically, the article adopts a doctrinal and normative legal research design based on structured analysis of public law doctrine, relevant judicial and administrative materials, and governance instruments on automated decision-making. The focus is on developing a doctrinal framework that treats algorithmic systems as legally reviewable decision infrastructures rather than neutral technical tools. The study highlights the importance of reviewing not only final automated outcomes but also earlier design choices, including data use, system objectives, and oversight mechanisms. The research is important because it offers courts and lawmakers clearer legal tools to assess algorithmic administration, with particular attention to settings where oversight is weak and emergency powers risk becoming normalized, increasing the danger of opacity, discrimination, and unchecked security repurposing.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1781692</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1781692</link>
        <title><![CDATA[Limitations of current copyright frameworks for large language models trained on scientific literature]]></title>
        <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
        <category>Hypothesis and Theory</category>
        <dc:creator>Yuanyuan Huang</dc:creator><dc:creator>Hui Liu</dc:creator>
        <description><![CDATA[Recent copyright lawsuits against artificial intelligence (AI) and large language model (LLM) developers have ignited debates over how to balance technological innovation with the public interest. In the scientific research field, the performance and reliability of LLMs trained on scientific literature (SciLit-LLMs) depend heavily on access to comprehensive, up-to-date full-text sources. This paper argues that the current copyright framework, including the U.S. fair use doctrine, often regarded as a flexible solution for AI-related copyright issues, is ill-suited for SciLit-LLMs. First, the normative values emphasized in scientific research—such as accuracy, transparency and interpretability—fundamentally conflict with the “transformative use” requirement central to copyright law. Second, the expression of scientific literature, which is intended to ensure scientific precision rather than to convey creative originality, remains insufficiently considered under current copyright law. Third, the fair use doctrine’s emphasis on limiting the proportion of use from a single copyrighted work contradicts the need for comprehensive training on information-dense scientific texts. Finally, commercial use restrictions impede the sustainable development of SciLit-LLMs and preclude a mutually beneficial model for researchers, publishers, developers, and the public. Imposing current copyright restrictions on these models is unjustified, unnecessary, and risks perpetuating scientific biases. We therefore propose reconstructing copyright exceptions for scientific literature and removing commercial use restrictions to better support scientific innovation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1782405</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1782405</link>
        <title><![CDATA[Technical evaluation of language models adapted for the automation of legal contracts: clause extraction, classification, and summarization]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Jaime Govea</dc:creator><dc:creator>Iván Ortiz-Gárces</dc:creator><dc:creator>Pablo Palacios</dc:creator><dc:creator>Alexandra Maldonado Navarro</dc:creator><dc:creator>Santiago Acurio Del Pino</dc:creator><dc:creator>William Villegas-Ch</dc:creator>
        <description><![CDATA[The growing demand for automation in legal contract management exposes a persistent limitation of current language models: insufficient adaptation to the semantic, structural, and regulatory constraints of legal language. While large language models perform well on general NLP tasks, their direct application to legal document classification, clause extraction, and contract summarization often yields unstable, legally unreliable outputs. This work presents a structured methodological pipeline for evaluating and adapting language models for legal contract automation, combining domain-specific fine-tuning of open-source models with a controlled comparative assessment against large general-purpose LLMs used exclusively in inference mode. The methodology integrates legal corpus curation, clause-level annotation, and efficient adaptation techniques, and is evaluated across three core tasks: contract document classification, normative clause extraction, and regulatory summarization. The evaluation protocol is explicitly designed to disentangle the effects of supervision from deployment constraints arising in regulated legal settings. Experimental results show consistent and statistically significant performance gains for legally adapted models over general-purpose baselines, achieving Macro-F1 of 0.921 in classification, span-level F1 of 0.903 in clause extraction, and ROUGE-L of 0.886 in summarization (p < 0.01). Robustness analysis and cross-validation confirm stability across heterogeneous private-sector contract types. The findings should be interpreted under the evaluated comparison regime and highlight that, in legally constrained multi-stage workflows, task-aligned supervision provides measurable structural benefits that are not reducible to model scale alone when general-purpose LLMs are restricted to inference-only deployment.]]></description>
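        <!--
          A small sketch of the Macro-F1 evaluation used for the document
          classification task, via scikit-learn; the contract-type labels are
          hypothetical.

          from sklearn.metrics import f1_score

          y_true = ["nda", "lease", "employment", "nda", "lease", "services"]
          y_pred = ["nda", "lease", "employment", "lease", "lease", "services"]
          print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of per-class F1
        -->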
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1716094</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1716094</link>
        <title><![CDATA[Psychological features of dispute content and public acceptance of AI in legal adjudication: evidence for systematic variation beyond individual differences]]></title>
        <pubDate>Tue, 10 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Masahiro Fujita</dc:creator><dc:creator>Eiichiro Watamura</dc:creator>
        <description><![CDATA[Public acceptance of artificial intelligence (AI) in legal decision-making has been primarily explained through individual differences in personality traits and general attitudes toward technology. However, emerging evidence suggests that contextual features of legal disputes themselves may systematically influence preferences for AI versus human adjudicators. Across two studies with Japanese participants (N = 1,384 and N = 596), we examined whether psychological characteristics of dispute content—beyond demographics and individual traits—shape acceptability judgments for algorithmic adjudication. Study 1 employed exploratory factor analysis on acceptability ratings across 46 legal dispute vignettes, revealing a robust two-dimensional structure distinguishing interpersonal-relational disputes (where human adjudicators were strongly preferred) from institutional-procedural disputes (where AI acceptance was comparatively higher, though not surpassing human preference in most cases). Study 2 replicated this dimensional structure in an independent sample and demonstrated that experimentally manipulated contextual features—emotional involvement and prototypicality—systematically modulated acceptability judgments, with effects varying by dispositional trust, AI-specific attitudes, and gender. AI-specific expectations emerged as the strongest predictor of acceptance (η2 = 0.252), and a three-way interaction among emotional involvement, gender, and prototypicality indicated that contextual effects are moderated by individual characteristics. These findings suggest that the psychological features of dispute content constitute an overlooked dimension in AI acceptance research, extending beyond technology acceptance models to fundamental questions about how individuals construe social problems and allocate adjudicative authority. We discuss limitations related to measurement approaches, alternative psychological mechanisms, and directions for future research employing real-world case materials and direct assessment of cognitive processes.]]></description>
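        <!--
          A compact sketch of the exploratory-factor-analysis step on the 46
          vignette ratings, assuming a respondents x vignettes matrix; the data
          here are synthetic, and the two components mirror the reported
          two-dimensional structure.

          import numpy as np
          from sklearn.decomposition import FactorAnalysis

          rng = np.random.default_rng(0)
          ratings = rng.normal(size=(300, 46))           # synthetic respondents x 46 vignettes
          fa = FactorAnalysis(n_components=2, rotation="varimax").fit(ratings)
          loadings = fa.components_.T                    # 46 vignettes x 2 factors
          print(loadings.shape)
        -->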
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1759211</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1759211</link>
        <title><![CDATA[Audit-as-code: a policy-as-code framework for continuous AI assurance]]></title>
        <pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Aoun E. Muhammad</dc:creator><dc:creator>Kin-Choong Yow</dc:creator><dc:creator>Shrooq Alsenan</dc:creator>
        <description><![CDATA[Introduction: Existing AI assurance and governance frameworks rely heavily on documented written policies and manual reviews of their implementation. The primary challenge is not the length of these documents but the gap in operationalizing them: transforming qualitative requirements into verifiable controls. This makes continuous compliance throughout the development life cycle hard to enforce, scale, and reproduce. Methods: This study presents a continuous assurance framework called Audit-as-Code that maps governance requirements to technically auditable rules, combining a versioned policy specification with executable checks over evidence artifacts, linked to structured evidence on data, models, provenance, performance, and decisions, together with explanations of the decisions being made. While the framework addresses governance and regulatory mapping requirements, the primary focus of this study is MLOps/CI-CD (continuous integration/continuous delivery) operationalization: turning these requirements into deterministic checks and gate decisions integrated into operational workflows. We introduce an assured readiness score that integrates governance risk with other key responsible AI principles, such as traceability and explainability. This approach helps align deployment decisions with predefined risk tiers, and the framework automates decisions on whether a system can proceed, requires remediation, or should be blocked. It also provides targeted suggestions for improvement and compliance for the gaps identified. Results: We evaluated this framework on representative AI systems and demonstrated how a single evidence bundle can support assessment across different governance regulations. Discussion: In doing so, Audit-as-Code transforms AI assurance from a documentation-driven policy exercise into a quantitative, auditable, reproducible, and operationally practical module for ensuring compliance.]]></description>
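        <!--
          A minimal sketch of the audit-as-code idea: a versioned policy rule
          expressed as an executable check over an evidence bundle, yielding a
          deterministic gate decision. Field names and thresholds are
          assumptions, not the paper's schema.

          POLICY = {"min_accuracy": 0.90, "require_provenance": True}   # versioned policy spec

          def gate(evidence):
              # deterministic gate decision: proceed, remediate, or block
              if POLICY["require_provenance"] and not evidence.get("provenance"):
                  return "block"
              if evidence.get("accuracy", 0.0) < POLICY["min_accuracy"]:
                  return "remediate"
              return "proceed"

          bundle = {"accuracy": 0.93, "provenance": {"dataset": "v1.2", "model": "v0.9"}}
          print(gate(bundle))                            # "proceed"
        -->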
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2025.1742239</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2025.1742239</link>
        <title><![CDATA[AI and digital justice in EU labor law. A comparative study on predictive tools and judicial transformation]]></title>
        <pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Marianna Molinari</dc:creator><dc:creator>Marco Giacalone</dc:creator>
        <description><![CDATA[This paper examines how artificial intelligence (AI) and digital tools are reshaping European labor law litigation, particularly in redundancy disputes. Conducted within the I-Tools To Design And Enhance Access To Justice (IDEA) project, it draws on a comparative survey across six Member States—Belgium, Croatia, the Czech Republic, Estonia, Italy, and Lithuania—to identify best practices in digitalized court systems. The findings point to uneven digital maturity: Estonia and Lithuania lead in digital development (8.4/10; 8.0/10) and show more favorable attitudes toward predictive justice (6.0/10; 5.8/10), whereas Belgium, Croatia, the Czech Republic, and Italy, despite having digital tools, struggle to integrate them into routine legal workflows, reinforcing greater resistance to predictive justice. Although digital justice can improve access and efficiency, concerns persist around fairness, judicial trust, and ethical safeguards. Progress, the study suggests, depends on participatory governance, clear regulation, and the careful integration of technology into procedural frameworks. Building on these insights, the current phase of IDEA positions AI as a regulatory instrument that structures access to justice and guides user behavior. The consortium is developing a legal chatbot to guide workers and employers toward the most suitable resolution pathway—negotiation, mediation, or litigation—based on context, cost, and procedural guarantees. The chatbot tailors its responses in accordance with prevailing legal trends and the reasoning behind case law. A pilot in three Member States (MSs) will test its potential to enhance transparency, empower users, and promote proportional, informed dispute resolution within European labor justice. Against this backdrop, the paper conceptualizes AI—operating through structured information, triage, and explainability—as a regulatory instrument that can steer behavior ex ante and support compliance ex post in redundancy disputes, complementing adjudication without displacing judicial authority.]]></description>
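        <!--
          A toy sketch of the kind of rule-based triage the planned chatbot
          could perform; the criteria, thresholds, and pathway labels are purely
          hypothetical, not the IDEA consortium's design.

          def triage(dispute):
              # route to the lightest pathway whose guarantees fit the dispute
              if dispute.get("parties_cooperative") and dispute["claim_value"] < 5000:
                  return "negotiation"
              if not dispute.get("legal_question_novel"):
                  return "mediation"
              return "litigation"                        # novel legal questions need a judicial ruling

          print(triage({"parties_cooperative": True, "claim_value": 1200}))  # negotiation
        -->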
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2025.1718613</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2025.1718613</link>
        <title><![CDATA[Testing the applicability of a governance checklist for high-risk AI-based learning outcome assessment in Italian universities under the EU AI Act Annex III]]></title>
        <pubDate>Thu, 11 Dec 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Flavio Manganello</dc:creator><dc:creator>Alberto Nico</dc:creator><dc:creator>Martina Ragusa</dc:creator><dc:creator>Giannangelo Boccuzzi</dc:creator>
        <description><![CDATA[Background: The EU AI Act classifies AI-based learning outcome assessment as high-risk (Annex III, point 3b), yet sector-specific frameworks for institutional self-assessment remain underdeveloped. This creates accountability gaps affecting student rights and educational equity, as institutions lack systematic tools to demonstrate that algorithmic assessment systems produce valid and fair outcomes. Methods: This exploratory study tests whether ALTAI’s trustworthy AI requirements can be operationalized for educational assessment governance through the XAI-ED Consequential Assessment Framework, which integrates three educational evaluation theories (Messick’s consequential validity, Kirkpatrick’s four-level model, Stufflebeam’s CIPP). Following pilot testing with three institutions, four independent coders applied a 27-item checklist to policy documents from 14 Italian universities (13% with formal AI policies plus one baseline case) using four-point ordinal scoring and structured consensus procedures. Results: Intercoder reliability analysis revealed substantial agreement (Fleiss’s κ = 0.626, Krippendorff’s α = 0.838), with the higher alpha reflecting predominantly adjacent-level disagreements suitable for exploratory validation. Analysis of the 14 universities reveals substantial governance heterogeneity among early adopters (Institutional Index: 0.00–60.32), with Technical Robustness and Safety showing the lowest implementation (M = 19.64, SD = 21.08) and Societal Well-being the highest coverage (M = 52.38, SD = 29.38). Documentation prioritizes aspirational statements over operational mechanisms, with only 13% of Italian institutions having adopted AI policies by September 2025. Discussion: The framework demonstrates feasibility for self-assessment but reveals a critical misalignment: universities document aspirational commitments more readily than technical safeguards, with particularly weak capacity for validity testing and fairness monitoring. Findings suggest three interventions: (1) ministerial operational guidance translating EU AI Act requirements into educational contexts, (2) inter-institutional capacity-building addressing technical-pedagogical gaps, and (3) integration of AI governance indicators into national quality assurance systems to enable systematic accountability. The study contributes to understanding how educational evaluation theory can inform the translation of abstract trustworthy AI principles into outcome-focused institutional practices under high-risk classifications.]]></description>
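        <!--
          A short sketch of the Fleiss's-kappa reliability check across the four
          coders, using statsmodels; the four-point scores below are synthetic.

          import numpy as np
          from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

          # rows = checklist items, columns = coders, values = ordinal scores 0..3
          scores = np.array([[2, 2, 3, 2],
                             [0, 0, 0, 1],
                             [3, 3, 3, 3],
                             [1, 2, 1, 1]])
          table, _ = aggregate_raters(scores)            # item x category count matrix
          print(fleiss_kappa(table))
        -->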
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2025.1688209</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2025.1688209</link>
        <title><![CDATA[Evaluating AI decision tools in Ecuador’s courts: efficiency, consistency, and uncertainty in legal judgments]]></title>
        <pubDate>Thu, 06 Nov 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Eliana Rodríguez-Salcedo</dc:creator><dc:creator>Carlos Martínez-Bonilla</dc:creator><dc:creator>Betty Pérez-Mayorga</dc:creator><dc:creator>Mónica Salame-Ortiz</dc:creator><dc:creator>Pamela Armas-Freire</dc:creator><dc:creator>Anita Espín-Miniguano</dc:creator><dc:creator>Eulalia Pino-Loza</dc:creator>
        <description><![CDATA[This study explores the impact of AI-based decision support tools on judicial performance in Ecuador, a context characterized by institutional uncertainty and procedural inefficiencies. It assesses whether such tools improve efficiency, consistency, and the normative quality of legal reasoning in judicial decisions. A mixed-methods approach was applied to analyze fifty court cases before and after AI implementation. Quantitative analysis used t-tests, Levene’s test, and Mann–Whitney U test to evaluate procedural duration and inter-rater agreement, while natural language processing techniques, including topic modeling (LDA) and sentiment analysis (VADER), assessed changes in semantic structure and argumentation. In parallel, a content analysis of twelve policy and regulatory documents was conducted to examine changes in algorithmic governance discourse. The results show a statistically significant reduction in case resolution time (−23.5 days), an increase in inter-evaluator consistency (Cohen’s kappa from 0.65 to 0.80), a shift toward more neutral-technical language, and greater density of legal citations. Mentions of governance principles such as transparency and accountability also increased. These findings indicate that AI-based tools, when used as assistive systems, can enhance judicial decision-making in uncertain environments without displacing human deliberation. While the study provides robust initial evidence, its exploratory sample and reliance on interpretable NLP techniques reflect the constraints of a low-resource judicial context and highlight avenues for future research. This research contributes to the literature on advanced analytical methods for institutional decision-making under legal and epistemic uncertainty.]]></description>
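        <!--
          A minimal sketch of the before/after duration comparison via the
          Mann-Whitney U test in scipy; the resolution times are invented.

          from scipy.stats import mannwhitneyu

          before = [120, 98, 134, 110, 145, 101, 127]    # days to resolution, pre-AI
          after = [95, 80, 102, 88, 117, 79, 104]        # days to resolution, post-AI
          u, p = mannwhitneyu(before, after, alternative="two-sided")
          print(f"U={u}, p={p:.4f}")
        -->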
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2025.1671474</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2025.1671474</link>
        <title><![CDATA[Legal Logit Model for predicting judicial disagreement in Indian courts]]></title>
        <pubDate>Wed, 22 Oct 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Sivaranjani N</dc:creator><dc:creator>Jayabharathy J</dc:creator>
        <description><![CDATA[Once a case reaches the Supreme Court on appeal, the justices may either affirm or reverse the judgment of the lower court. Forecasting such judicial disagreement is important not only for predicting outcomes but also for understanding the judge-specific and case-specific factors that drive these decisions. This study presents the Legal Logit Model (LLM), a neural network-based extension of the Multinomial Logit (MNL) model. The LLM combines the interpretability of discrete choice theory with the flexibility of neural networks. It is therefore capable of modeling complex, non-linear interactions while preserving transparency about the influence of individual features. Utilizing features extracted from both cases and judges, the model predicts whether the Supreme Court will reverse a lower court's ruling and highlights the factors most strongly associated with disagreement. When tested on a dataset of Supreme Court opinions, the LLM achieves 80% accuracy in predicting outcomes, outperforming conventional logit and deep learning-based models. Although the possibility of motivated reasoning in Supreme Court opinions limits causal interpretation, the findings show that the LLM presents an interpretable and effective predictive framework applicable to the study of judicial decision-making.]]></description>
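        <!--
          A bare-bones sketch of the multinomial-logit structure underlying the
          Legal Logit Model: a softmax over linear utilities for affirm vs.
          reverse. The paper layers a neural network on this structure; the
          weights and features here are hypothetical.

          import numpy as np

          def softmax(z):
              e = np.exp(z - z.max())
              return e / e.sum()

          weights = np.array([[0.8, -0.3, 0.5],          # utility weights: affirm
                              [-0.4, 0.6, 0.2]])         # utility weights: reverse
          case_features = np.array([1.0, 0.2, -0.5])     # judge- and case-level features
          print(softmax(weights @ case_features))        # P(affirm), P(reverse)
        -->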
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2025.1637134</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2025.1637134</link>
        <title><![CDATA[Algorithmic fairness: challenges to building an effective regulatory regime]]></title>
        <pubDate>Fri, 29 Aug 2025 00:00:00 +0000</pubDate>
        <category>Perspective</category>
        <dc:creator>Greg Demirchyan</dc:creator>
        <description><![CDATA[Unfair treatment by artificial intelligence toward protected groups has become an important topic of discussion. Its potential for causing harm has spurred many to think that legislation aimed at regulating AI systems is essential. In the US, laws have already been proposed both by Congress and by several key states. However, a number of challenges stand in the way of effective legislation. Proposed laws mandating testing for fairness must articulate clear positions on how fairness is defined. But the task of selecting a suitable definition (or definitions) of fairness is not a simple one. Experts in AI continue to disagree as to what constitutes algorithmic fairness, which has led to an ever-expanding list of definitions that are highly technical in nature and require expertise that most legislators simply do not possess. Complicating things further, several of the proposed definitions are incommensurable with one another, making a cross-jurisdictional regulatory regime incorporating different standards of fairness susceptible to inconsistent determinations. On top of all this, legislators must also contend with existing laws prohibiting group-based discrimination that codify conceptions of fairness that may not be suitable for evaluating certain algorithms. In this article, I examine these challenges in detail and suggest ways to deal with them such that the regulatory regime that emerges is more effective in carrying out its intended purpose.]]></description>
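        <!--
          A toy numeric illustration of the incommensurability point: with
          different base rates across two groups, equalizing selection rates
          (demographic parity) forces unequal true-positive rates (violating
          equal opportunity). All figures are invented for illustration.

          base_rate = {"A": 0.6, "B": 0.3}               # fraction truly qualified per group
          selection_rate = 0.5                           # identical for both groups (parity)

          for group, br in base_rate.items():
              # best case: fill the selection budget with qualified members first
              tpr = min(selection_rate / br, 1.0)        # achievable true-positive rate
              print(group, round(tpr, 2))                # A: 0.83, B: 1.0, so TPRs differ
        -->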
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2025.1561322</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2025.1561322</link>
        <title><![CDATA[Stakeholder-specific adoption of AI in HRM: workers’ representatives’ perspective on concerns, requirements, and measures]]></title>
        <pubDate>Fri, 30 May 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Christine Malin</dc:creator><dc:creator>Jürgen Fleiß</dc:creator><dc:creator>Stefan Thalmann</dc:creator>
        <description><![CDATA[Introduction: AI regulations aim to balance AI’s potential and risks in general and in human resource management (HRM) in particular. However, these regulations are not yet finalized, and the perspectives of key stakeholders in HRM applications remain unclear. Research on AI in HRM has contributed only to a limited extent to the understanding of key HRM stakeholders, and the perspective of workers’ representatives is especially lacking. Methods: This paper presents a study of three focus group workshops investigating workers’ representatives’ perspectives, to determine which concerns they perceive when using AI in HRM, which resulting requirements they have for adopting AI in HRM, and which measures they perceive as most suitable to fulfill them. Results: Our results revealed that workers’ representatives were critical of using AI across all HRM phases, particularly in personnel selection. We identified requirements and measures for adopting AI in HRM from the perspective of workers’ representatives, summarized in a catalog of six dimensions: control, human oversight, responsibilities, transparency and explainability, lawful AI, and data security. Discussion: Our findings shed a nuanced light on workers’ representatives’ needs, providing relevant insights for research on stakeholder-oriented adoption of AI in HRM and for specifying current AI regulations.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2024.1447171</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2024.1447171</link>
        <title><![CDATA[Political ideology shapes support for the use of AI in policy-making]]></title>
        <pubDate>Wed, 30 Oct 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Tamar Gur</dc:creator><dc:creator>Boaz Hameiri</dc:creator><dc:creator>Yossi Maaravi</dc:creator>
        <description><![CDATA[In a world grappling with technological advancements, the concept of Artificial Intelligence (AI) in governance is becoming increasingly realistic. While some may find this possibility incredibly alluring, others may see it as dystopian. Society must account for these varied opinions when implementing new technologies or regulating and limiting them. This study (N = 703) explored Leftists’ (liberals) and Rightists’ (conservatives) support for using AI in governance decision-making amidst an unprecedented political crisis that swept through Israel shortly after the government proclaimed its intention to initiate reform. Results indicate that Leftists are more favorable toward AI in governance. While legitimacy is tied to support for using AI in governance among both groups, Rightists’ acceptance is also tied to perceived norms, whereas Leftists’ approval is linked to perceived utility, political efficacy, and warmth. Understanding these ideological differences is crucial, both theoretically and for practical policy formulation regarding AI’s integration into governance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2024.1411838</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2024.1411838</link>
        <title><![CDATA[Governing AI in Southeast Asia: ASEAN’s way forward]]></title>
        <pubDate>Fri, 30 Aug 2024 00:00:00 +0000</pubDate>
        <category>Perspective</category>
        <dc:creator>Bama Andika Putra</dc:creator>
        <description><![CDATA[Despite the rapid development of AI, ASEAN has not been able to devise a regional governance framework to address relevant existing and future challenges. This is concerning, considering the potential of AI to accelerate GDP growth among ASEAN member states in the coming years. This qualitative inquiry discusses AI governance in Southeast Asia over the past 5 years and what regulatory policies ASEAN can explore to better regulate its use among its member states. It considers the unique political landscape of the region, defined by the adoption of distinctive norms such as non-interference and the prioritization of dialogue, commonly termed the ASEAN Way. The following measures are concluded as potential regional governance frameworks: (1) elevation of the topic’s importance in ASEAN’s intra- and inter-regional forums to formulate collective regional agreements on AI, (2) adoption of AI governance measures in the field of education, specifically reskilling and upskilling strategies to respond to the future transformation of the working landscape, and (3) establishment of an ASEAN working group to bridge knowledge gaps among member states caused by the disparity in AI readiness across the region.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2024.1320277</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2024.1320277</link>
        <title><![CDATA[Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices]]></title>
        <pubDate>Tue, 21 May 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Xukang Wang</dc:creator><dc:creator>Ying Cheng Wu</dc:creator><dc:creator>Xueliang Ji</dc:creator><dc:creator>Hongpeng Fu</dc:creator>
        <description><![CDATA[Introduction: Algorithmic decision-making systems are widely used in various sectors, including criminal justice, employment, and education. While these systems are celebrated for their potential to enhance efficiency and objectivity, they also pose risks of perpetuating and amplifying societal biases and discrimination. This paper aims to provide an in-depth analysis of the types of algorithmic discrimination, exploring both the challenges and potential solutions. Methods: The methodology includes a systematic literature review, analysis of legal documents, and comparative case studies across different geographic regions and sectors. This multifaceted approach allows for a thorough exploration of the complexity of algorithmic bias and its regulation. Results: We identify five primary types of algorithmic bias: bias by algorithmic agents, discrimination based on feature selection, proxy discrimination, disparate impact, and targeted advertising. The analysis of the U.S. legal and regulatory framework reveals a landscape of principled regulations, preventive controls, consequential liability, self-regulation, and heteronomy regulation. A comparative perspective is also provided by examining the status of algorithmic fairness in the EU, Canada, Australia, and Asia. Conclusion: Real-world impacts are demonstrated through case studies focusing on criminal risk assessments and hiring algorithms, illustrating the tangible effects of algorithmic discrimination. The paper concludes with recommendations for interdisciplinary research, proactive policy development, public awareness, and ongoing monitoring to promote fairness and accountability in algorithmic decision-making. As the use of AI and automated systems expands globally, this work highlights the importance of developing comprehensive, adaptive approaches to combat algorithmic discrimination and ensure the socially responsible deployment of these powerful technologies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2024.1333219</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2024.1333219</link>
        <title><![CDATA[Editorial: Hammer or telescope? Challenges and opportunities of science-oriented AI in legal and sociolegal research]]></title>
        <pubDate>Fri, 26 Apr 2024 00:00:00 +0000</pubDate>
        <category>Editorial</category>
        <dc:creator>Nicola Lettieri</dc:creator><dc:creator>Alessandro Pluchino</dc:creator>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2024.1285026</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2024.1285026</link>
        <title><![CDATA[A dynamic approach for visualizing and exploring concept hierarchies from textbooks]]></title>
        <pubDate>Thu, 08 Feb 2024 00:00:00 +0000</pubDate>
        <category>Technology and Code</category>
        <dc:creator>Sabine Wehnert</dc:creator><dc:creator>Praneeth Chedella</dc:creator><dc:creator>Jonas Asche</dc:creator><dc:creator>Ernesto William De Luca</dc:creator>
        <description><![CDATA[In this study, we propose a visualization technique to explore and visualize concept hierarchies generated from a textbook in the legal domain. Through a human-centered design process, we developed a tool that allows users to effectively navigate through and explore complex hierarchical concepts with three kinds of traversal techniques: top-down, middle-out, and bottom-up. Our concept hierarchies offer an overview of a given domain, with an increasing level of detail toward the bottom of the hierarchy, which consists of entities. In the legal use case we considered, the concepts were adapted from section headings in a legal textbook, whereas references to laws or legal cases inside the textbook became entities. The design of the tool was refined through several steps: gathering user needs, identifying pain points of an existing visualization, prototyping, testing, and refining. The resulting interface offers users several key features, such as dynamic search and filter, explorable concept nodes, and a preview of leaf nodes at every stage. A high-fidelity prototype was created to test our theory and design. To evaluate the concept, we used the System Usability Scale to measure the prototype's usability, a task-based survey to assess the tool's ability to assist users in gathering information and interacting with the prototype, and mouse tracking to understand user interaction patterns. Along with this, we gathered audio and video footage of users while they participated in the study. This footage also helped us obtain feedback when survey responses required further information. The data collected provided valuable insights that set the directions for extending this study. As a result, we have accounted for varying hierarchy depths, text spans longer than one to two words in the elements of the hierarchy, searchability, and exploration of the hierarchies. At the same time, we aimed to minimize visual clutter and cognitive overload. We show that existing approaches are not suitable for visualizing the type of data that we support with our visualization.]]></description>
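        <!--
          A quick sketch of the standard System Usability Scale scoring the
          study used: odd-numbered items contribute (score - 1), even-numbered
          items (5 - score), summed and scaled by 2.5. The responses are invented.

          responses = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]     # ten 1..5 Likert answers

          raw = sum((r - 1) if i % 2 == 0 else (5 - r)   # even index = odd-numbered item
                    for i, r in enumerate(responses))
          print(raw * 2.5)                               # SUS score out of 100
        -->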
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2023.1282020</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2023.1282020</link>
        <title><![CDATA[The use of automation in the rendition of certain articles of the Saudi Commercial Law into English: a post-editing-based comparison of five machine translation systems]]></title>
        <pubDate>Fri, 12 Jan 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <dc:creator>Rafat Y. Alwazna</dc:creator>
        <description><![CDATA[Efforts to automate translation were made as early as the 1950s and 1960s, albeit with limited resources compared to current advanced standards. Machine translation is a branch of computational linguistics that examines the use of computer software to render text from one language into another. The present paper compares five different machine translation systems, assessing the quality of their outputs in rendering certain articles of the Saudi Commercial Law into English through post-editing based on Human Translation Edit Rate. Each machine translation output is assessed against the same post-edited version, and the output closest to the post-edited version with regard to lexicon and word order achieves the lowest score. The lower the score of a machine translation output, the higher its quality. The paper then analyses the results of the Human Translation Edit Rate evaluation to ascertain whether high-quality machine translation outputs always produce acceptable Arabic–English legal translation. The present paper argues that the Human Translation Edit Rate metric is a useful tool for undertaking post-editing procedures, as it combines human evaluation with automatic evaluation. It is also advantageous in that it takes account of both lexicon and word order. However, the metric cannot be fully relied upon, as a single term substitution, counted by the metric as one error, may render a whole sentence invalid, particularly in legal translation. This paper offers a baseline for the quality assessment of machine translation output through post-editing based on the Human Translation Edit Rate metric and shows how its results should be analysed within the Arabic–English legal translation context, which may have implications for similar machine translation output quality assessment contexts.]]></description>
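        <!--
          A simple sketch of an edit-rate score in the spirit of Human
          Translation Edit Rate: token-level edit distance from the MT output to
          its post-edited version, divided by the post-edit length. Note that
          full (H)TER also counts block shifts as single edits, which plain
          Levenshtein distance does not; the sentences are invented.

          def edit_distance(a, b):
              # classic dynamic-programming Levenshtein over token lists
              d = [[i + j if 0 in (i, j) else 0 for j in range(len(b) + 1)]
                   for i in range(len(a) + 1)]
              for i in range(1, len(a) + 1):
                  for j in range(1, len(b) + 1):
                      d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                                    d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
              return d[len(a)][len(b)]

          mt = "the contract shall void be".split()
          post_edit = "the contract shall be void".split()
          print(edit_distance(mt, post_edit) / len(post_edit))   # lower = closer to the post-edit
        -->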
      </item>
      </channel>
    </rss>