November Global Regulatory Brief: Digital finance | Insights

SEC 2025 examination priorities to focus on AI use by brokers and advisers

The U.S. Securities and Exchange Commission (“SEC”) released its Fiscal Year (FY) 2025 examination priorities, outlining which areas the Commission will focus on when conducting examinations of firms for compliance with SEC rules and regulations. 

Scope: In FY2025, the Commission highlighted the use of artificial intelligence by investment advisers, brokers, clearing agencies and other financial firms subject to SEC oversight as a priority for examination. The Commission plans to review, among other things:

  • registrant representations regarding their AI capabilities or AI use for accuracy;
  • whether firms have implemented adequate policies and procedures to monitor and/or supervise their use of AI, including for tasks related to fraud prevention and detection, back-office operations, anti-money laundering (AML), and trading functions, as applicable; and
  • integration of regulatory technology to automate internal processes and optimize efficiencies.

Important context: AI was a priority for the Commission in FY2024; however, the FY2025 report is more detailed on the focus the Commission will take and comes against a backdrop of consistent enforcement actions to address AI washing. In addition to AI, the FY2025 priorities also include plans to examine registrants’ cybersecurity procedures and practices.

SEC charges firms for AI washing

The SEC announced charges against Rimar Capital USA, Inc., Rimar Capital, LLC, Itai Liptz, and Clifford Boro for “making false and misleading statements about Rimar LLC’s purported use of artificial intelligence, or AI, to perform automated trading for client accounts and numerous other material misrepresentations.”

Important context: This action is the latest in a series of enforcement actions taken by the Commission against firms they allege have misrepresented their use of AI. Without admitting or denying the Commission’s findings, Liptz consented to pay disgorgement and prejudgment interest totaling $213,611, to pay a $250,000 civil penalty, and to be subject to an investment company prohibition and associational bar with the right to reapply in five years. Boro agreed to pay a $60,000 civil penalty. Rimar LLC consented to be censured.

Commerce Department proposes rule requiring AI developers to disclose cybersecurity protections

The Bureau of Industry and Security (BIS) within the Department of Commerce has issued a Notice of Proposed Rulemaking which outlines new mandatory reporting requirements for developers of AI models and computing clusters. 

In summary: Specifically aimed at developers of dual-use foundation models that meet the computational threshold set out in President Biden’s 2023 Executive Order on AI, the proposal would require developers to submit detailed reporting to the federal government on a quarterly basis. 

  • In particular, reporting would be required on developmental activities, cybersecurity measures, and outcomes from red-teaming efforts, amongst other items. 
  • Comments are due by October 11, 2024.

FCA launches AI Lab

The UK Financial Conduct Authority has launched the AI Lab, a new initiative to help firms overcome challenges in building and implementing AI solutions and to inform the UK’s developing regulatory environment on AI in financial services. 

In summary: The AI Lab formally adds an AI focus to the FCA’s innovation services and will help drive the safe and responsible use of AI in UK financial markets. Specifically, it is intended to provide insights and case studies that help inform the regulatory approach in a practical and collaborative manner. 

Key aspects: The AI Lab will consist of four components:

  • AI Spotlight: The AI Spotlight will provide a space for firms and innovators to share real-world examples of how they are leveraging AI, and to share emerging AI solutions that will lead to industry growth. The AI Spotlight will be hosted on the FCA’s Innovation Services portal, offering a valuable repository of practical solutions that showcase applications of AI across specific themes. Those accepted will also take part in a Showcase Day at the FCA London office on 28 January 2025. 
  • AI Sprint: The AI Sprint will bring together stakeholders to focus on how to enable the safe adoption of AI in financial services; the inaugural AI Sprint will be hosted in January 2025. 
  • AI Input Zone: Stakeholders are invited to have their say on the future of AI in UK financial services, including the FCA’s regulatory approach, through an online feedback platform that will open soon. The FCA wants to hear views on the most transformative use cases, how well the current framework works, and how the FCA may need to adapt in the future. 
  • Supercharged Sandbox: Looking ahead, the FCA will run AI-focused TechSprints and enhance the Digital Sandbox infrastructure through greater computing power, enriched datasets and increased AI testing capabilities.

Japan to develop guidelines on quantum computing for financial services

The Japan Financial Services Authority (JFSA) established a Working Group on Post-Quantum Cryptography for Financial Services to study the potential impact and risks of quantum computing on financial services. 

In summary: The JFSA noted the importance of preemptively assessing the impact that quantum computing could have on the financial sector, and of proposing measures to ‘quantum proof’ the financial sector. 

  • The Working Group comprises representatives from banks and regulatory organisations, including Mizuho Financial Group, Nagoya Bank, Shizuoka Bank, the Bank of Japan, and the NPO Japan Network Security Association. 
  • Cybersecurity experts from the Center for Financial Industry Information Systems and the Cabinet Secretariat’s National center of Incident readiness and Strategy for Cybersecurity participated as observers.

Key features of guidelines: The Working Group noted that quantum computers were not expected until 2030, but that the transition to post-quantum cryptography could take 10 to 20 years. 

  • It is therefore necessary to start planning and implementing measures early. 
  • The Working Group also highlighted the need for Japanese financial institutions to develop an inventory of information assets that have to be protected. 
  • The guidelines to ‘quantum proof’ the financial sector should outline issues that financial institutions should address individually, as well as those that need to be addressed as an industry. 

Closely related: The guidelines would take reference from standards developed by the US National Institute of Standards and Technology (NIST).

HKMA issues circular regarding risk associated with third-party IT solutions

The Hong Kong Monetary Authority (HKMA) has issued a circular on the risks associated with third-party IT solutions, reminding Authorized Institutions (AIs) to ensure that adequate measures are put in place to effectively manage third-party dependencies and enhance operational resilience against the failure of third-party IT solutions.

In more detail: In addition to the principles and guidance provided in the HKMA’s supervisory policy manual modules, cyber risk assessment framework (C-RAF), and circulars on third-party risk management, the HKMA expects the senior management of AIs to ensure that their institutions take into account certain good industry practices when reviewing and enhancing their risk management controls.

The good practices (set out in the Annex to the circular) cover the following subject areas:

  • reviewing and enhancing third-party risk assessment processes;
  • evaluating software update scheduling and monitoring processes;
  • implementing testing and rollback procedures;
  • adopting gradual deployment strategies;
  • managing privileged access;
  • defining communication and escalation protocols for large-scale outages of common IT infrastructure;
  • identifying critical interdependencies; and
  • enhancing the robustness of system backups.

What’s next? While the circular does not set out specific next steps, the HKMA will continue to monitor for good practices in its normal course of supervision of AIs.

European Union adopts final rules on major ICT incident reporting, ICT third-party vendor oversight under DORA

The EU Commission adopted the ESAs’ final draft regulatory and implementing technical standards (RTS/ITS) related to major ICT incident reporting under DORA.

  • RTS on major ICT-related incident reporting and notification of significant cyber threats (DORA Art. 20a)
  • ITS on harmonised reporting templates and standard forms for firms to report a major ICT-related incident and to notify a significant cyber threat (DORA Art. 20b)

In more detail: The rules specify the content, format, templates and timelines of the reports that EU financial entities need to provide to competent authorities in case of major ICT-related incidents as well as for the voluntary notification of significant cyber threats. 

  • Separately, the EU Commission also adopted the ESAs’ final draft RTS on harmonising the conditions for conducting oversight of ICT third-party service providers (DORA Art. 41).
  • The rules specify the content and format of the information that critical ICT third-party providers (CTPPs) will have to provide to the Lead Overseer, and the details of the competent authorities’ assessment of the measures taken by CTPPs based on the Lead Overseer’s recommendations.

Context: The rules were part of the second batch of ESAs Level 2 policy measures published in July 2024. The EU Commission has to endorse the draft rules published by the ESAs in order for them to become applicable in the EU. 

Next steps: The texts will now go through a final scrutiny period before publication in the EU Official Journal. They will undergo the following steps: 

  • The RTS on incident reporting and oversight harmonisation will have to undergo a three-month scrutiny period by the EU Parliament and Council – if neither institution objects, they will be published in the Official Journal of the EU (OJEU) and enter into force.
  • The ITS on the reporting templates are set to be approved by an expert group composed of EU countries’ representatives and will enter into force following the EU Official Journal publication.

Go-live: The DORA rules are set to apply in the EU from January 17, 2025.

Digital securities sandbox opens for applications

The Bank of England and the Financial Conduct Authority have opened the Digital Securities Sandbox (DSS) for applications. 

Structure of the DSS: The activities in scope of the DSS allow for a firm to: (1) perform notary, maintenance and/or settlement activities; (2) operate a trading venue; or (3) combine both into a hybrid entity.

  • The DSS is composed of different stages of permitted activity; firms progress from an initial application stage through to, potentially, operating in a new permanent regime outside the sandbox. 
  • The regulators have provided detailed guidance on each stage of the DSS. 
  • Sandbox entrants must ensure that the use of new technologies does not compromise the high standards required of FMIs in terms of resilience and data protection.

The intention: The DSS aims to allow firms to experiment with new and different technological innovations in the issuance, trading and settlement of securities by allowing firms to operate under a temporarily modified legal and regulatory framework. 

  • The DSS concerns traditional financial markets such as equities, corporate and government bonds, money market instruments such as commercial paper and certificates of deposit, units in collective investment undertakings (fund units) and emissions allowances. 
  • The trading and settlement of derivative contracts and of ‘unbacked cryptoassets’ such as Bitcoin are not in the scope of the DSS. 
  • The DSS is also designed to promote quicker and more effective regulatory change as regulators can observe activity and consider whether policy changes are required. 

Context: The DSS is the first Financial Market Infrastructure (FMI) sandbox created under the FMI sandbox powers conferred on HM Treasury (HMT) by the Financial Services and Markets Act (FSMA) 2023. FMI sandboxes allow firms to experiment with new or different practices and developing technology in the key functions of FMI.

RBA considers impact of AI on financial stability

The Reserve Bank of Australia has published analysis of the impact of artificial intelligence (AI) on the financial system and its implications for financial stability.

AI adoption – supply side: On the supply side, the RBA finds that advancements in AI capabilities and access have played a crucial role in its adoption.

  • Continuous improvements in AI tools and computational power have made AI more accessible and effective for financial institutions.
  • Additionally, the increased availability of large data sources and improved IT infrastructure, such as cloud computing, have reduced the barriers to adopting AI, making it easier for financial institutions to integrate AI into their operations.

AI adoption – demand side: On the demand side, the RBA finds that the adoption of AI offers opportunities to enhance profitability through revenue generation, cost reduction and increased productivity.

  • Competitive pressures to innovate and stay ahead in an increasingly digital landscape have encouraged financial institutions to explore use cases for AI.
  • Customers expect personalised services, faster transactions and greater protection from scams and cyber-attacks – all of which can be supported by AI.
  • Additionally, AI tools can assist in regulatory compliance, such as meeting anti-money laundering (AML) and know-your-customer (KYC) requirements, and contribute to risk management frameworks, by identifying patterns and predicting potential risks, among other things.

Key benefits: The RBA finds that AI has improved efficiency and productivity across both back- and front-office operations. 

  • Notable applications include assessing borrower creditworthiness, trade execution, transaction monitoring, code improvement, and review of lengthy documents against specific criteria.
  • The RBA also finds that there are some applications that can enhance financial stability such as carefully designed algorithms that improve financial firms’ operational efficiency, risk management, and regulatory compliance.

Key risks: The RBA finds that AI can contribute to financial system vulnerabilities and has identified four types of risk:

  • Operational risk from concentration of service providers – Increasing reliance on a small number of AI and related third-party service providers can create vulnerabilities due to a single point of failure.
  • Herd behaviour and market correlation – The increased use of AI coupled with limited diversification of providers, models and data sources, may lead to higher correlation within markets which could exacerbate herd behaviour and aggravate the transmission of shocks.
  • Increased cyber threats – Advances in AI have already increased the number and sophistication of cybersecurity threats and cyber-attacks that could significantly disrupt the financial system by amplifying volatility and increasing funding and liquidity vulnerabilities.
  • Risks around models, data and governance – AI models are typically complex and opaque and there are concerns that mistakes and ‘hallucinations’ could create false realities with widespread market influence.

Existing regulatory framework for AI: Australian financial sector regulators will continue to rely on the existing regulatory frameworks which are high-level, principles-based and technology neutral.

  • If concerns arise that cannot be addressed under these frameworks, targeted initiatives may need to be considered.
  • Following the launch of the consultation on Safe and Responsible AI in Australia, the Government announced in January 2024 that it was considering introducing mandatory guardrails to promote the safe design, development and deployment of AI systems throughout the economy.

HKMA invites banks to apply for generative AI (GenAI) sandbox participation

The Hong Kong Monetary Authority (HKMA) is inviting banks to apply for participation in its new Generative Artificial Intelligence (GenAI) Sandbox, which was launched by the HKMA in conjunction with Hong Kong Cyberport last month, to enable banks to pilot GenAI use cases within a risk-managed framework.

In more detail: The HKMA said it encourages banks to explore a diverse range of AI implementation, including those focused on “Retrieval-Augmented Generation, model adaptation, fine-tuning of pre-trained models or training of new models”. The sandbox trials are expected to leverage advanced AI, including GenAI, models designed for real-time interaction, domain-specific assessment, decision-making support or predictive analytics, with a specific focus on three areas:

  • enhancing risk management, e.g. creditworthiness evaluations, financial statement analysis, risk assessment report generation
  • anti-fraud measures, e.g. deepfake detection/prevention, fraudulent message identification and response
  • customer experience, e.g. chatbots that can generate personalized responses based on customer background, past interactions
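One of the techniques banks are encouraged to explore, Retrieval-Augmented Generation, grounds a model’s answers in documents retrieved from an internal knowledge base before generation. As an illustration only, the following minimal Python sketch substitutes simple word overlap for real embedding-based retrieval and omits the model call; the document snippets and function names are hypothetical, not drawn from any HKMA material.

```python
import re

def tokenize(text):
    """Lowercase word tokenization (a crude stand-in for real embeddings)."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query and return the best matches."""
    scored = sorted(documents,
                    key=lambda d: len(tokenize(d) & tokenize(query)),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, documents):
    """Assemble the grounded prompt a generative model would receive;
    the generation step itself is omitted from this sketch."""
    context = " ".join(retrieve(query, documents))
    return f"Context: {context}\nQuestion: {query}"

# Hypothetical internal policy snippets serving as the knowledge base.
docs = [
    "Customers flagged for unusual transactions require enhanced due diligence.",
    "Chatbot responses must not disclose account balances without authentication.",
]

prompt = build_prompt("When is enhanced due diligence required?", docs)
```

In a production setting the retrieval step would query a vector store of bank documents and the assembled prompt would be passed to a large language model, which is what allows chatbot responses to be personalised to customer background and past interactions without retraining the model.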

Next steps: Applications will be accepted until 15 November, with selected projects expected to be completed within six months from December this year. Based on the results of the sandbox trials, the HKMA will share good practices and consider the need for developing further supervisory guidance on the adoption of AI.
