Search and recommendation are the main ways people access information and services online. They shape decisions in many real-world settings, from career development and workforce planning to travel planning and everyday information seeking on large platforms. As these systems move from research to deployment, they must handle practical constraints: large and fast-changing catalogs, sparse or noisy interaction data, cold-start users and items, and context that shifts with time, location, and social setting.
At the same time, the rapid rise of large language models (LLMs) is changing how we build and assess information access systems. LLMs can support conversational interaction, semantic understanding, and content generation, and they can be integrated across the pipeline, from query understanding and ranking to explanation and evaluation. These capabilities open new opportunities, but they also raise sharper requirements for trustworthiness (privacy, bias, fairness, transparency, safety, robustness, and generalization). In high-impact domains such as human resources (HR), and increasingly in consumer-facing platforms, these issues are amplified by regulatory and societal pressures (e.g., emerging AI governance frameworks) and by the real costs of mistakes.
This Research Topic focuses on responsible and robust evaluation for recommendation and search systems used in real-world decision support. The papers reflect complementary perspectives:
(i) time-robust group recommendation to support comment featuring and moderation decisions on news platforms under changing content cycles;
(ii) career path prediction with stronger, more realistic evaluation setups and modeling choices to support HR decision-making;
(iii) LLM-as-a-judge evaluation frameworks for structured outputs in information access tasks such as search query parsing; and
(iv) multistakeholder fairness in tourism, drawing on tourism management to broaden how we define and evaluate fairness and impact beyond purely algorithmic criteria.
Together, these contributions emphasize evaluation practices that better match deployment conditions, stakeholder needs, and the reliability requirements of real-world information access systems.
Keywords: recommender systems, personalized tourism, user modeling, travel recommendations, RecSys, responsible evaluation, robustness and generalization, information retrieval and search, large language models (LLMs), LLM-as-a-judge evaluation, career path prediction, group recommendation, content moderation on news platforms, multistakeholder fairness in tourism, generative search, generative recommendation, human resources