Over the past decade, the intersection of artificial intelligence and public policy research has been reshaped by the evolution of large language models (LLMs). These deep neural networks have transformed how language is analysed and generated within the social sciences, offering novel capabilities for text generation, comprehension, and classification. As LLMs become increasingly integrated into the workflows of political scientists and policy researchers, they open up possibilities for scalable, data-driven investigations of policymaking, political communication, and governance. This rapid technological advancement, however, is accompanied by ongoing debates over methodological reliability, normative implications, and the need for robust validation frameworks. Recent studies have demonstrated both the efficacy and the limitations of LLMs for tasks such as automated sentiment analysis, ideological scaling, and policy simulation, highlighting persistent gaps in transparency, cultural inclusivity, and the ethical deployment of these tools in sensitive decision-making contexts.
This Research Topic aims to showcase the ways in which large language models can bridge computational advances and policy scholarship, focusing on their empirical applications, methodological innovations, and the broader normative questions they raise. The objective is to elucidate best practices for integrating LLMs into public policy research, provide comparative assessments against traditional analytical methods, and critically examine the wider societal implications of their use in governance and political discourse. Central questions include: How do LLMs enable new forms of policy analysis? What standards should guide their validation and interpretation in political research? What risks and opportunities arise when LLM-based systems are embedded across public-sector contexts?
We encourage submissions that explore both the opportunities and the challenges of this evolving field. The scope is limited to research that contributes directly to the methodological, empirical, and normative understanding of LLMs in public policy. We welcome articles addressing themes including, but not limited to, the following:
• empirical case studies utilising LLMs in policy or political analysis
• innovations in validating and interpreting LLM outputs for policy research
• comparison of LLM performance with conventional analytical approaches (e.g., supervised learning, manual coding); an illustrative validation sketch follows this list
• ethical, normative, and governance implications of LLM use in policy settings
• strategies for enhancing inclusivity, accountability, and transparency in LLM-based research.
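For illustration only, and not as a prescribed method: a validation exercise of the kind envisaged by the second and third themes above might compare LLM-assigned labels against human-coded labels using standard inter-coder agreement statistics. The minimal Python sketch below assumes hypothetical topic labels for ten documents and relies on scikit-learn's cohen_kappa_score and classification_report; a real study would substitute the outputs of its own annotation pipeline and codebook.

# Minimal sketch (hypothetical data): agreement between LLM-assigned
# and human-coded policy-topic labels.
from sklearn.metrics import cohen_kappa_score, classification_report

# Hypothetical labels for ten documents; in practice these would come
# from an LLM annotation pipeline and a manual codebook, respectively.
llm_labels = ["health", "economy", "health", "defence", "economy",
              "health", "economy", "defence", "health", "economy"]
human_labels = ["health", "economy", "economy", "defence", "economy",
                "health", "economy", "health", "health", "economy"]

# Cohen's kappa corrects raw agreement for agreement expected by chance.
print(f"Cohen's kappa (LLM vs. human coding): "
      f"{cohen_kappa_score(llm_labels, human_labels):.2f}")

# Per-category precision and recall, treating the human codes as reference.
print(classification_report(human_labels, llm_labels))

Reporting a chance-corrected statistic such as Cohen's kappa alongside per-category metrics, rather than raw percentage agreement alone, is one common way to make LLM-versus-human comparisons interpretable and comparable across studies.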
We welcome a diversity of article types, including original research, methodological papers, perspective pieces, reviews, and policy briefs.
Article types and fees
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Conceptual Analysis
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Policy and Practice Reviews
Policy Brief
Registered Report
Review
Study Protocol
Systematic Review
Technology and Code
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
Keywords: LLMs, policy research, computational social science, validation, ethics, transparency
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.