AI Act impact in the financial sector: the road ahead

As the EU’s AI Act is gradually coming into force, the financial sector is gearing up to adapt its approach to artificial intelligence. How can financial institutions ensure that their AI systems are safe and transparent, and how will the roles and responsibilities of compliance teams evolve? A look into the legislation’s implications and our predictions for the future.


The AI Act in a nutshell

The EU AI Act represents the world’s first comprehensive regulatory framework for AI, establishing key principles for the use of artificial intelligence by EU-based organisations. Its main aim is to ensure that AI systems are safe, transparent and respect human rights, including privacy and non-discrimination. The legislation formally took effect in August 2024, with most of its provisions set to fully kick in by 2026.

Stricter standards, broader compliance implications

The AI Act has major implications for financial institutions, as the sector increasingly depends on AI systems that the European Commission classifies as 'high-risk', such as those used in risk assessment procedures and Anti-Money Laundering (AML) processes. The requirements for these high-risk AI use cases are strict, and they include rigorous standards for both data quality and governance. Banks and brokers must devise risk classification procedures and subsequent risk management procedures for their AI models, for example. Monitoring and incident reporting is another new factor they have to take into account. The true impact on financial institutions, however, is much more far-reaching than the regulation initially suggests. Here's our take.

How financial institutions can future-proof themselves for the AI Act

A new, hybrid approach, incorporating AI support across teams, is likely to redefine traditional compliance roles and drive organisational restructuring. How should financial institutions effectively integrate AI into their compliance processes, and what steps can they take to stay ahead of the requirements set by the AI Act? Here are two trends:

1. Increased use of AI by regulatory authorities

We’re expecting regulatory authorities in EU member states to increasingly use AI in their own investigations of financial institutions. For instance, anomaly detection already plays a substantial role in identifying suspicious transactions. Banks and brokers need to be ready for this shift and ensure they have a robust system in place for managing dossiers. Take client risk estimates, for example: AI audits can greatly improve them, but only if the correct risk parameters are applied. Filtering thousands of files daily while using incorrect or non-AI Act-compliant risk parameters could easily be flagged by a regulator's own AI-assisted investigation in the near future.

2. Internal quality control

Quality Control is another key area where banks and brokers can use AI to their advantage – if they play their cards right. As the third line of defence in financial institutions, internal audits and checks can be used strategically, but the field is evolving rapidly and will continue to do so. Risk assessments will increasingly rely on AI systems, allowing for near-full automation of the vast majority of dossiers. Artificial intelligence will do much of the heavy lifting, changing the role of human employees in the process. Instead of reviewing every case, team members will focus more deeply on a smaller number of high-risk files, while also evaluating the AI's performance on the more straightforward cases.

Moving towards a hybrid approach to AI risk assessment

The examples above bring us to a key aspect of the AI Act: financial institutions must continuously evaluate the performance of their AI systems, which redefines the role of compliance teams. They will have an important role to play in comparing their AI models with their more traditional, rule-based decision-making models. How should banks respond if their AI models start hallucinating and produce unexpected results? How should they handle false positives? What metrics should they use to evaluate and adjust their models?

This shift will lead to more interaction between traditional decision-making models and AI. To support this transition, new roles will emerge, with humans determining where AI fits into their overall compliance toolkit and ensuring that artificial intelligence systems align with their compliance frameworks transparently, using the right parameters.

How Harmoney helps financial institutions

At Harmoney, we deliver the capabilities that compliance teams need to meet these new challenges. Harmoney's workflow engine provides everything you need to combine AI-driven decisions with manual checks. While AI validation will add new types of checks to your teams' workload, this can be offset by automating parts of your operational workflows, in turn allowing you to scale your validation efforts. With the Harmoney Platform as their organisational backbone, banks and brokers can achieve AI Act compliance and address their compliance needs more quickly, with minimal investment and setup time. Our customisable risk models and parameters can be adapted to each company's unique policies and processes, offering the flexibility to manage multiple workflows and review processes simultaneously. Additional integrations with leading specialists (like SymphonyAI's NetReveal or Discai for transaction monitoring) ensure full compliance for complete peace of mind.

Harmoney offers a cutting-edge digital platform that streamlines intricate onboarding and compliance procedures, featuring automated screening functionalities. Interested in discovering more about our innovative solution? Reach out to us for further details!