Artificial intelligence (AI) has come a long way since the Dartmouth workshop of 1956 and is increasingly becoming an integral part of the world we live in. We are now in an AI era in which data, storage, (open source) software, algorithms, and super/cloud computing are available at low cost. The availability problem of AI-based technologies is largely solved, and many AI-based financial services and products are now deployed at the enterprise level. To remain relevant and become "AI-first", the financial services industry (FSI) must develop these technologies at scale. Doing so promises substantial benefits in terms of efficiency, quality, speed, autonomy, and inclusion.
However, extensive use of AI also brings risks, such as the explanation gap (the black-box problem) and the trust problem, both of which are central concerns for the FSI. Terms such as ethics, fairness, unbiasedness, robustness, sustainability, standardization, and algorithm registration also enter the discussion. For this collection of articles, we call for paper submissions that address solutions, approaches, (model) risk management, and standardization tools for explainable, trustworthy, and responsible AI. Potential subtopics include strategies to review, monitor, interpret, measure, automate, and standardize AI models, as well as support for AI compliance, risk management, and audit. Manuscripts may focus on processes, tools, frameworks, or on AI and advanced analytics itself, including synthetically generated data. They may also advance visual-analytical tools that explain data, decisions, and algorithms.
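To make this concrete, the sketch below illustrates one such interpretation tool: permutation feature importance, a simple global explanation that measures how much held-out performance degrades when each feature is shuffled. Everything here (the data, the feature names, and the choice of a gradient-boosting credit-scoring model) is a synthetic placeholder for illustration, not a prescribed method.

```python
# A minimal sketch of a global explanation: permutation feature importance
# on a hypothetical credit-scoring classifier. Data and feature names are
# synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # synthetic applicant features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in zip(["income", "debt_ratio", "age", "tenure"], result.importances_mean):
    print(f"{name}: {mean:.3f}")
```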
Such research will improve control over AI and reduce AI incidents. It will be essential for regulators, consumers, citizens, and the FSI itself (as product/service owners or senior management), and it will help data science teams manage the complexity associated with AI. Each audience needs its own approaches and tools to reduce this complexity. On top of this fundamental need for explainability, the financial sector faces increasingly sophisticated adversaries capable of executing large-scale data breaches, data tampering, and theft of confidential information. This threat likewise calls for robust and stable methods that can handle noise and remain reliable under adversarial corruption of data.
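As an illustration of the kind of stability check this implies, the sketch below measures how often a model's predictions flip when small Gaussian noise is added to its inputs. It assumes the hypothetical model and test split from the previous sketch; the noise scale and number of trials are arbitrary illustrative choices.

```python
# A minimal robustness sketch: fraction of predictions that change under
# small random input perturbations. Reuses `model` and `X_test` from the
# previous (synthetic) example.
import numpy as np

def prediction_flip_rate(model, X, noise_scale=0.1, n_trials=20, seed=0):
    """Average fraction of predictions that flip under Gaussian input noise."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    flips = np.zeros(len(X))
    for _ in range(n_trials):
        noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        flips += (model.predict(noisy) != base)
    return (flips / n_trials).mean()

print(f"flip rate under noise: {prediction_flip_rate(model, X_test):.3f}")
```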
In this context, this Research Topic aims to collect original papers proposing innovative methodologies for global or local explanations, as well as for assessing the fairness and robustness of AI-based systems applied to financial problems. We welcome submissions centered on these topics and applied to any area of financial services, products, or regulation.
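Purely as an example of the fairness assessments mentioned above, the sketch below computes a demographic parity difference: the gap in positive-prediction rates between two groups. The protected-attribute labels are randomly generated and, like the model, purely illustrative.

```python
# A minimal fairness sketch: demographic parity difference, the absolute gap
# in positive-prediction rates between two groups. The group labels are
# synthetic; reuses `model` and `X_test` from the earlier example.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive prediction rates between group 0 and group 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=len(X_test))  # synthetic protected attribute
gap = demographic_parity_difference(model.predict(X_test), group)
print(f"demographic parity gap: {gap:.3f}")
```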