ESMA publishes guidance on the use of AI under MiFID II rules

ESMA warned that firms’ management bodies remain responsible for decisions, whether made by humans or AI.

The European Securities and Markets Authority (ESMA) has provided initial guidance to firms using Artificial Intelligence (AI) technologies when they provide investment services to retail clients.

The EU’s financial markets regulator and supervisor said it expects firms to comply with relevant MiFID II requirements, particularly when it comes to organizational aspects, conduct of business, and their regulatory obligation to act in the best interest of the client.

ESMA reminded firms that the potential uses of AI covered by MiFID II requirements include customer support, fraud detection, risk management, compliance, and support in the provision of investment advice and portfolio management.

The regulator, however, warned of the inherent risks of using AI in the context of investment services. These risks include:

  • Algorithmic biases and data quality issues;
  • Opaque decision-making that is difficult for a firm’s staff members to understand;
  • Overreliance on AI by both firms and clients for decision-making; and
  • Privacy and security concerns linked to the collection, storage, and processing of the large amount of data needed by AI systems.

ESMA’s key points about AI under MiFID II

AI has the potential to transform retail investment services by enhancing efficiency, innovation, and decision-making, ESMA said, while noting that the technology introduces risks such as algorithmic biases, data quality issues, and a lack of transparency. ESMA warned that firms’ management bodies remain responsible for decisions, whether made by humans or AI.

Potential Uses of AI:

Customer Service: AI-powered chatbots and virtual assistants.
Investment Advice and Portfolio Management: Personalized recommendations and risk management based on AI analysis of client data.
Compliance: Summarizing regulations, detecting non-compliance, and preparing reports.
Risk Management: Evaluating investment risks and monitoring portfolio risk.
Fraud Detection: Identifying unusual patterns indicating fraud.
Operational Efficiency: Automating tasks like data entry and report generation.

Risks and Challenges:

Over-reliance on AI: Neglecting human judgment can be risky in unpredictable markets.
Lack of Transparency: AI systems can be “black boxes” with unclear decision-making processes.
Data Privacy and Security: Handling large amounts of data raises privacy concerns.
Bias and Reliability: AI outputs can be incorrect or biased, and their reliability depends on the quality of the training data.

MiFID II Requirements:

Firms must act in clients’ best interests and be transparent about AI’s role in investment decisions.
Management bodies need to understand and oversee AI technologies, ensuring alignment with strategy and compliance.
Effective risk management frameworks should be in place, including testing and monitoring AI systems.
Data used for AI should be relevant, accurate, and comprehensive.
Firms must ensure staff have the knowledge and training to manage AI technologies.

Conduct of Business Requirements:

Ensuring AI systems align with product governance and suitability requirements.
Implementing quality assurance processes, including algorithm testing and stress tests.
Adhering to data protection regulations to safeguard client information.
Maintaining comprehensive records on AI utilization and related client complaints.



Financefeeds.com