About this Research Topic

Submission closed.

Artificial intelligence (AI) has come a long way since the Dartmouth workshop in 1956 and is increasingly becoming an integral part of the world we live in. We are now in an AI era in which data, storage, (open-source) software, algorithms, and super/cloud computing are available at low cost. The availability problem of AI-based technologies is almost solved, and many AI-based financial services and products are now deployed at the enterprise level. The financial services industry (FSI) must develop these technologies at scale to remain relevant and become "AI-first". This will bring substantial benefits in terms of efficiency, quality, speed, autonomy, and inclusion.

However, extensive use of AI also involves risks, such as the explanation gap (the black-box problem) and the trust problem, which are of particular concern in the FSI. Terms such as ethics, fairness, unbiasedness, robustness, sustainability, standardization, and algorithm registration also enter the discussion. For this collection of articles, we call for submissions that address solutions, approaches, (model) risk management, and standardization tools for explainable, trustworthy, and responsible AI. Potential subtopics include strategies to review, monitor, interpret, measure, automate, and standardize AI models, and to support AI compliance, risk management, and audit. Manuscripts may focus on processes, tools, frameworks, or even AI or Advanced Analytics itself, including synthetically generated data. They may also advance visual-analytical tools that explain data, decisions, and algorithms.

Such research will improve control over AI and reduce AI incidents. It will be essential for regulators, consumers, citizens, and the FSI itself (as product/service owners or senior management), and it will help data science teams manage the complexity associated with AI. Each audience needs its own approaches and tools to reduce this complexity. On top of this fundamental need for explainability, the financial sector faces increasingly sophisticated adversaries capable of executing large-scale data breaches, data tampering, and theft of confidential information. This threat likewise calls for robust and stable methods that can handle noise and remain reliable under adversarial corruption of data.

In this context, this Research Topic aims to include original papers proposing innovative methodologies for global or local explanations, as well as for assessing the fairness and robustness of AI-based systems applied to financial problems. We welcome submissions centered on these topics and applied to all areas of financial services, products, and regulation.
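As a concrete illustration of what a local (per-instance) explanation produces, the sketch below attributes one applicant's predicted approval probability in a toy credit-scoring model to individual features via single-feature ablation. This is only an illustrative assumption of one simple approach; the synthetic data, feature set, and model choice are hypothetical and not part of this Research Topic.

# Minimal sketch: local explanation by single-feature ablation on a toy
# credit-scoring model (all data and features are synthetic/hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicant data: three hypothetical features (e.g., income,
# debt ratio, account age), with an outcome driven by the first two.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def local_attribution(model, x, background):
    """Attribute one applicant's score to each feature by replacing that
    feature with its background mean and measuring the change in the
    predicted probability of approval."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = []
    for j in range(x.size):
        x_ref = x.copy()
        x_ref[j] = background[:, j].mean()
        contributions.append(base - model.predict_proba(x_ref.reshape(1, -1))[0, 1])
    return np.array(contributions)

applicant = X[0]
print(local_attribution(model, applicant, X))  # per-feature contributions

Submitted methodologies would typically go well beyond such ablation (for example, toward Shapley-value-based attributions or counterfactual explanations), but the output structure, a per-instance vector of feature contributions, is the same.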

Keywords: Explainable AI, FSI, Responsible AI, Artificial Intelligence, Finance


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

