Artificial intelligence (AI) is rapidly transforming our world in ways that are hard to understand. Legislators are doing their best to keep up and create regulation that allows us to make the most of technological advances while minimising the risks.
The EU AI Act, the world’s first comprehensive AI law, entered into force in August 2024. For the most part, it will become applicable in August 2026. Transition periods range from 24 to 36 months, with two exceptions: the prohibitions on certain AI systems apply from February 2025, and the rules for General-Purpose AI models apply from August 2025.
The act aims to ensure that AI systems in the EU are safe and respect fundamental rights. It also aims to encourage a single market for AI. It follows a risk-based approach, categorising different types of AI systems according to risk. The higher the risk, the stricter the rules.
I cannot imagine any reasonable person protesting the aims of the AI regulation. The financial sector, too, can stand behind the act’s goals. What is problematic, however, is the immense volume of digital and data regulation adopted by the previous Commission and the ambiguity contained in the new rules. It is imperative that the authorities issue clarifying guidance on these matters.
The financial sector has used AI for various purposes for quite some time now. AI enables the collection and use of data at lightning speed, allowing financial sector companies to increase the efficiency of a number of their processes. AI can also facilitate customer service, tailor solutions that best serve customer needs and direct customers to the appropriate service channels, resulting in faster and better service.
At the moment, however, the financial sector is struggling to determine which activities and software solutions fall under the AI Act’s definitions of AI systems and high-risk AI systems. For example, it is unclear whether certain statistical tools should be considered AI systems when used in the provision of financial sector products and services. If the act’s definitions are interpreted unreasonably broadly, even Microsoft Excel, the spreadsheet application we are all familiar with, could be construed as an AI system when used by financial sector companies and other market participants.
Under the AI Act, many key activities in the financial sector fall into the high-risk category. These include, for example, AI-based creditworthiness assessments in banking as well as pricing and risk assessments in insurance. High-risk AI systems must meet strict criteria related to logging capabilities, transparency and human oversight.
The problem is that the AI Act’s requirements partially overlap with other financial sector regulation. The benefits of AI will be undone if companies must fulfil the same reporting obligations twice but are prohibited from reusing existing reports, impact assessments and documentation to meet the overlapping obligations.
On a more positive note, the draft government proposal, which has just entered the consultation phase, designates the Finnish Financial Supervisory Authority (FIN-FSA) and the Data Protection Ombudsman as the primary market supervisory authorities for the financial sector. These two authorities would be responsible for supervising financial sector companies when the placing on the market, deployment or use of AI systems is directly related to the provision of financial services. The FIN-FSA’s expert knowledge of the financial sector’s business models and operations is vital in the supervision of AI use. Thanks to its expertise in the management of operational risks, for example, the FIN-FSA is the right supervisory authority for financial sector companies. Finance Finland hopes that the FIN-FSA will be granted the necessary resources for this supervision.
My advice to EU policymakers is that instead of generating new regulation, the new Commission should focus on resolving discrepancies and ambiguities in the implementation and application of current regulation. That alone is a massive undertaking. Weeding out ambiguities in current regulation and creating more sensible definitions would be the best way to achieve the aim of the AI Act: making sure that AI is a good servant to customers, financial sector companies and society alike.