The International Organization of Securities Commissions (IOSCO) is the global body that brings together the world's securities regulators and promotes harmonised standards of regulation for the securities sector. Its September 2021 report provides guidance to assist IOSCO members (such as the FCA in the UK) in supervising market intermediaries and asset managers that utilise Artificial Intelligence (AI) and Machine Learning (ML).
AI and ML are increasingly being used in financial services, driven by a combination of “increased data availability and computing power”. IOSCO predicts that the use of AI and ML by market intermediaries and asset managers may alter firms’ business models in the future.
Market intermediaries are using AI and ML in “advisory and support services, risk management, client identification and monitoring, selection of trading algorithm and asset management/portfolio management”. For example, Capital One developed its in-house intelligent assistant ‘Eno’, built on its own natural language processing technology, to enhance its customer experience.
Asset managers' use of the technology is still developing; it is mainly deployed to support human decision-making by optimising portfolio management, suggesting investment recommendations and improving research capabilities. BlackRock has reported on how asset managers have developed AI and ML tools “to compile, cleanse, and analyse the universe of data available, including analyst reports, macroeconomic data (e.g., GDP growth, unemployment) as well as newer “alternative” data sources (e.g. GPS data)”.
While IOSCO recognises the potential performance benefits for firms that adopt AI and ML, it is wary of the risks to firms and consumers alike. The Report suggests that regulators should require firms to:
- Ensure senior management oversight of the development, testing, deployment, monitoring and controls of AI and ML. This could include drawing up internal governance frameworks;
- Test their algorithms in a non-live environment before deployment, and monitor them on a continuous basis;
- Maintain adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML they use, without having to rely on third-party providers;
- Understand their reliance on, and manage their relationships with, third-party providers, including by performing initial and ongoing due diligence on the scope of work of outsourced functions;
- Disclose meaningful information to customers, clients and regulators about their use of AI and ML; and
- Ensure that the data used by AI and ML is of sufficient quality to avoid bias and to support the proper application of the technology.
What has already been done in the UK?
The UK regulatory and academic community has already been assessing the impact of data and advanced analytics on financial services. The FCA has provided guidance on algorithmic trading, recommending that firms develop trade controls to monitor, identify and reduce potential trading risks, and encouraging firms to maintain an appropriate governance and oversight framework.
The FCA also published a joint survey report with the Bank of England to better understand the current use of ML in financial services. The survey results suggested that regulatory action may be needed on model risk management, which some respondents cited as a specific constraint on the deployment of ML. There were also calls for action on software and data validation, and on the governance, resilience and security of ML applications.
More recently, the FCA published an insight article in collaboration with the Alan Turing Institute on AI transparency in financial markets. The article presents an initial framework and highlights the importance of transparency in demonstrating the trustworthiness of AI, allowing customers both to understand and challenge the basis of AI outcomes and to make informed choices about their behaviour.
What potential risks have been identified by the FCA?
In its Call for Input published in 2020, the FCA identified issues requiring further investigation into their impact on competition and market integrity. Some of the issues identified that were not highlighted in the IOSCO Report include:
- Changing business models – firms may move from providing wholesale financial services to monetising their data by supplying it to others, which could change the structure of, and competition in, financial services.
- Competition in the supply of data – if only a limited number of third-party providers sell certain types of data sets for use in AI or ML, those suppliers may have little incentive to price competitively.
- Market stability – as firms adopt technology, such as third-party algorithms, in their portfolio management processes, more portfolios will be managed in a similar manner and large groups of clients will be exposed to identical risks. This ‘herding behaviour’ could lead to overreliance on certain tools, which in turn could influence trading across the market.
The FCA appreciates that although responding to these potential harms will promote competition in the interests of consumers, regulation could stifle innovation and create barriers to entry for AI firms. As a result, the FCA is taking a consultative approach to regulation in this area. Firms that use AI should watch closely for any future action taken by the regulator in response to the IOSCO Report.
The use of this technology by market intermediaries and asset managers may create significant efficiencies and benefits for firms and investors, including increased execution speed and reduced costs of investment services. However, it may also create or amplify certain risks, potentially affecting the efficiency of financial markets and resulting in consumer harm. The use of, and the controls surrounding, AI and ML within financial markets are, therefore, a current focus for regulators across the globe.