Upcoming supervisory focus? EBA publishes Final Report on Big Data and Advanced Analytics

On 13 January 2020, the EBA published its Report on Big Data and Advanced Analytics ("BD&AA") in the financial sector (EBA/REP/2020/01). Its legal basis is article 9 para. 2 of Regulation (EU) No 1093/2010, which mandates the EBA to "monitor new […] financial activities and […] adopt guidelines and recommendations with a view to promoting the safety and soundness of markets and convergence of regulatory practice." The Report is in line with the EBA's FinTech Roadmap of 15 March 2018, in which the EBA announced that it would observe current developments in financial technology more closely and would, as a first step, create the knowledge base needed for future regulatory work on this matter. Back in 2016, the Joint Committee of the European Supervisory Authorities (EBA, ESMA and EIOPA) had already addressed the topic of BD&AA in a Discussion Paper (JC 2016 86) and again in its subsequent Final Report (JC/2018/04) of 2018. In other words: the European Supervisory Authorities clearly have BD&AA on their radar.

Big Data and Advanced Analytics in the financial sector

Considering the opportunities and risks associated with BD&AA in the financial sector, the supervisory authorities' newfound interest does not come as a surprise. The terms "Big Data" and "Advanced Analytics" are generally used to describe technologies for the rapid analysis of large amounts of data. Complex algorithms are employed to process data collections which were previously considered too large or too unstructured for practical analysis.

There are numerous examples of the application of such technologies, although the extent of their use varies considerably between different areas. As one would expect, BD&AA technologies are commonly used in the insurance industry for continuous risk assessment. Automated investment brokerage and advice (so-called "Robo Advisory") is another field of application. In high-frequency trading, the use of BD&AA technologies is by now all but universal. The latter may also serve as an example of a particularly relevant part of "Advanced Analytics" which has undergone rapid development in recent years: "Artificial Intelligence" and "Machine Learning" ("AI/ML"). AI/ML technologies enable a system to autonomously "learn" to infer highly complex connections within relatively unstructured data collections. The result, and the main challenge, is a continuously growing complexity of the algorithm – up to the point of incomprehensibility even for the creators of the algorithm (a so-called "black box").
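To make the "black box" point concrete, the following minimal sketch (in Python, assuming the widely used scikit-learn library; the data and model are purely illustrative and not taken from the EBA report) trains a modest ML model and counts its internal decision rules, a number that quickly exceeds what any human reviewer could trace line by line:

```python
# Illustrative only: a modest ML model already produces more internal
# decision rules than a human reviewer could feasibly inspect.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic "customer" data: 5,000 observations with 20 features.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

model = GradientBoostingClassifier(
    n_estimators=300, max_depth=4, random_state=0
).fit(X, y)

# Count the decision nodes across all trees in the ensemble.
total_nodes = sum(tree[0].tree_.node_count for tree in model.estimators_)
print(f"trees: {model.n_estimators}, decision nodes in total: {total_nodes}")
```

With only 300 trees of depth 4, the model already comprises thousands of interacting decision nodes; state-of-the-art systems are larger by orders of magnitude.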

Consequently, recommendations or decisions based on the algorithm are becoming increasingly difficult to reconstruct. The inherent risk of such technologies lies in the limits of human supervisory capabilities. In high-frequency trading, where a majority of providers already use AI/ML technologies, this risk is quite obvious: due to the inherent time constraints, the algorithm's decisions to buy or sell can only be checked retrospectively, with considerable delay. As a result, a malfunctioning artificial intelligence can cause considerable damage before any manual intervention is possible.

The Final Report of the EBA and existing regulation of BD&AA

In line with the approach described in its Roadmap, the EBA has not yet issued any recommendations, let alone guidelines, on how national supervisory authorities should deal with BD&AA. Rather, the EBA's report seeks to share knowledge of the current use of BD&AA with national supervisory authorities (and other interested parties) and to identify areas for future regulation. Nevertheless, companies using or planning to use BD&AA technologies should follow this development closely: a new focus of supervision is about to emerge!

Having assessed the current use of BD&AA in the financial sector, the report highlights four key pillars of BD&AA which companies ought to consider when developing or implementing BD&AA technologies. These four pillars are:

  • Data management: Which data is processed and how is the data processed?
  • Technological infrastructure: Which hard- and software is used for collecting, storing and evaluating data?
  • Organization and governance: Who is authorized to influence the processing of the data or the algorithm? Is the staff trained accordingly?
  • Analytics methodology: What rules are used to collect and process data? How are the results used in practice?

The EBA identifies the trustworthiness of businesses, from the perspective of contractual partners and the market as a whole, as a main concern associated with the use of BD&AA, especially in combination with AI/ML technologies. To ensure that such trustworthiness can be maintained, companies using BD&AA are called upon to observe a number of different aspects ("elements of trust"), some of which cut across the abovementioned "pillars" of a BD&AA implementation. The elements of trust emphasised by the EBA include:

  • Explainability: Can the functionality of the system be explained to people (not only, but particularly, within the supervisory authorities)?
  • Traceability: Can a recommendation or decision of the system be traced back to identify the key factors behind it? (For a minimal technical illustration of both aspects, see the sketch after this list.)
  • Data protection: Are data protection requirements observed in all four pillars of the BD&AA system?
  • Consumer protection: Have consumer protection requirements been implemented in the BD&AA system?
  • Security: What security measures are taken against external attacks on the system?
  • Avoidance of discrimination: How does the system avoid discriminating against (some of its) users, especially when dealing with consumers?
  • Ethics: Are the ethical implications of BD&AA usage understood, and are these issues satisfactorily addressed? A "High-Level Expert Group on AI" appointed by the European Commission has already expressed its views on this in its Ethics Guidelines for Trustworthy AI of 8 April 2019. These guidelines include points already mentioned above and also draw attention to possible social and environmental consequences of using artificial intelligence.
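As a minimal illustration of what "explainability" and "traceability" can mean in practice, the following Python sketch (assuming scikit-learn; the feature names are hypothetical) uses model-agnostic permutation importance to name the key factors driving a model's output. This is only one of several established techniques, not a method prescribed by the EBA:

```python
# Illustrative sketch: naming the key factors behind a model's output
# via permutation importance (model-agnostic, works on "black boxes").
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["income", "age", "loan_amount", "tenure"]  # hypothetical
X, y = make_classification(n_samples=2000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X, y)

# How much does accuracy drop when one feature is randomly shuffled?
# Large drops identify the factors the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: mean importance {score:.3f}")
```

Output of this kind could also support a "verifiable explanation of functionalities" towards BaFin, for instance for a fraud-detection system (see below under sec. 53 para. 1 ZAG).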

Apart from purely ethical considerations, these "elements of trust" are not only to be observed in order to maintain trust in businesses operating within the financial sector; they also likely foreshadow future supervisory priorities. Users and developers of BD&AA technologies should be aware that, to a large extent, the points mentioned above already constitute actual legal requirements which, even though they may have lacked supervisory attention so far, can in principle be monitored by the competent national supervisory authorities as of today.

To name just a few examples:

Supervisory law

  • Credit institutions (sec. 25a para. 1 sent. 1, sent. 3 no. 3 German Banking Act – Kreditwesengesetz or "KWG"), alternative investment fund managers (sec. 28 para. 1 German Investment Code – Kapitalanlagegesetzbuch or "KAGB"), investment services enterprises (sec. 80 para. 1 sent. 1 German Securities Trading Act – Wertpapierhandelsgesetz or "WpHG") and payment institutions (sec. 27 para. 1 sent. 1, sent. 2 no. 1 German Payment Services Supervision Act – Zahlungsdiensteaufsichtsgesetz or "ZAG") are all legally required to have a proper business organisation, including appropriate internal control mechanisms for the fulfilment of their legal obligations. This requirement has deliberately been phrased broadly in each case and also covers the legal compliance of any BD&AA technologies used. Through such "general clauses", extensive supervision of the use of BD&AA by the German Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht or "BaFin") would already be possible today.

  • Payment institutions, electronic money institutions and credit institutions are legally required to establish, maintain and apply risk mitigation measures and control mechanisms to manage operational and security risks associated with the payment services provided by them (sec. 53 para. 1 ZAG). Insofar as BD&AA systems are used for this purpose (e.g. in fraud detection), BaFin is competent to check the legal compliance of such systems and may, for instance, demand a verifiable explanation of their functionalities.

  • Investment services enterprises trading in financial instruments (shares, bonds, warrants, derivatives, etc.) on the basis of an algorithm within the meaning of sec. 80 para. 2 WpHG are required to ensure that their system has appropriate trading thresholds and limits, that market disturbances are avoided, and that the system cannot be used for a purpose contravening European and national regulations against market abuse. Users of such algorithms, which are often created using AI/ML technologies, must ensure compliance with these requirements and, upon inquiry by BaFin, have to provide verifiable evidence of their compliance (a minimal sketch of such pre-trade controls follows this list).

  • Investment services enterprises within the meaning of sec. 1 para. 10 WpHG in principle have to adhere to sec. 63 et seq. WpHG, which stipulate certain information requirements and duties of conduct. Providers of "Robo Advisors" operating with highly complex algorithms developed using BD&AA technologies, however, might find it challenging to ensure that investment recommendations are always based on customer needs (and to provide verifiable proof of this upon request by BaFin), as required by sec. 63 para. 5 WpHG. Additionally, investment recommendations must be explained by the investment services enterprise in a way that is understandable to the customer (sec. 64 para. 4 WpHG). This raises the question to what extent, if at all, the functionality of the "Robo Advisor" should be addressed in such an explanation.
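What "appropriate trading thresholds and limits" within the meaning of sec. 80 para. 2 WpHG can look like at a technical level is illustrated by the following Python sketch. All names and limit values are hypothetical; the point is that hard, human-set limits remain enforceable even where the order-generating model itself is a black box:

```python
# Illustrative pre-trade controls: hard limits checked before any
# algorithmically generated order reaches the market.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    price: float

MAX_ORDER_VALUE = 1_000_000.0  # per-order notional cap (hypothetical)
MAX_DAILY_VOLUME = 50_000      # per-symbol daily quantity cap (hypothetical)

daily_volume: dict = {}

def pre_trade_check(order: Order) -> bool:
    """Reject any order breaching hard limits, independently of the
    (possibly opaque) model that generated it."""
    if order.quantity * order.price > MAX_ORDER_VALUE:
        return False  # notional value threshold exceeded
    traded = daily_volume.get(order.symbol, 0)
    if traded + order.quantity > MAX_DAILY_VOLUME:
        return False  # daily volume limit exceeded
    daily_volume[order.symbol] = traded + order.quantity
    return True

print(pre_trade_check(Order("XYZ", quantity=10_000, price=50.0)))  # True
print(pre_trade_check(Order("XYZ", quantity=45_000, price=20.0)))  # False (daily cap)
```

Logging every accepted and rejected order alongside the applicable limit would additionally support the "verifiable evidence" to be provided upon inquiry by BaFin.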

Data protection

  • Personal data may only be "collected for specified, explicit and legitimate purposes" and processed "in a transparent manner in relation to the data subject" (art. 5 para. 1 lit. a and b Regulation (EU) 2016/679 – General Data Protection Regulation or "GDPR"). Depending on the interpretation of this general transparency requirement in data protection law, its implementation may pose particular difficulties for market participants whose systems process personal data using complex algorithms developed with AI/ML technologies.

  • Data processing connected to automated decision-making (e.g. in the form of automated securities investments) is subject to special regulation in art. 15 para. 1 lit. h GDPR. According to this provision, the data subjects concerned are entitled to obtain "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing". Once again, this raises questions for users of BD&AA technologies: To what extent, if at all, should specific functionalities of the algorithm be addressed in the information to be provided? When is a statement in this regard considered "meaningful" for the data subject? (A minimal sketch of one possible approach follows this list.)

  • Art. 22 para. 1 GDPR may well prove a significant restriction on the use of BD&AA. According to this provision, a data subject has "the right not to be subject to a decision based solely on automated processing (…) which produces legal effects concerning him or her or similarly significantly affects him or her". Here, data protection law sets overall limits on the use of automated systems processing personal data. Users of such systems ought to be prepared to enable affected data subjects to enforce this right.
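The following Python sketch illustrates, under the assumption of a simple linear scoring model with hypothetical factor names, how a system could both generate "meaningful information about the logic involved" (art. 15 para. 1 lit. h GDPR) and route adverse outcomes to a human decision-maker rather than issuing a solely automated refusal (art. 22 GDPR). It is one conceivable design, not a statement of what the GDPR requires in detail:

```python
# Illustrative sketch: per-decision explanation plus a human-review
# route for adverse outcomes (hypothetical model and factor names).
weights = {"income": 0.8, "existing_debt": -1.2, "years_employed": 0.5}
THRESHOLD = 0.0

def decide(applicant: dict):
    """Score an application; return the decision and a per-factor
    explanation that could be given to the data subject."""
    contributions = {k: weights[k] * applicant[k] for k in weights}
    score = sum(contributions.values())
    explanation = [f"{factor} contributed {value:+.2f} to the score"
                   for factor, value in sorted(contributions.items(),
                                               key=lambda item: -abs(item[1]))]
    # Adverse outcomes are not refused automatically but handed to a
    # human decision-maker (one way to stay clear of art. 22 GDPR).
    decision = "approve" if score > THRESHOLD else "refer_to_human_review"
    return decision, explanation

decision, explanation = decide(
    {"income": 1.0, "existing_debt": 1.5, "years_employed": 2.0})
print(decision)
print(*explanation, sep="\n")
```

A per-decision record of this kind could, incidentally, also serve documentation duties such as the one in sec. 505b para. 4 BGB discussed below.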

Consumer protection

  • Sec. 505b para. 2 of the German Civil Code (Bürgerliches Gesetzbuch – "BGB") requires lenders of real estate consumer loans to appropriately take into account all factors relevant to the assessment of the borrower's creditworthiness. In order to enable the consumer to prove errors in this assessment, sec. 505b para. 4 BGB obliges the lender to document the procedures for, and the information underlying, the creditworthiness assessment. This raises the question to what extent, if at all, such documentation also has to address the basic functionalities of algorithms used to evaluate the consumer's creditworthiness.

  • Finally, the General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz – "AGG") may be relevant for some users of BD&AA technologies. Sec. 19 para. 1 no. 2 AGG, for instance, prohibits "discrimination on the grounds of race or ethnic origin, sex, religion, disability, age or sexual orientation" when concluding private-law insurance contracts. If a customer is able to produce "facts from which it may be presumed that there has been discrimination", it is up to the insurance company to prove that no provisions prohibiting discrimination have been violated (sec. 22 AGG). Providing such evidence may require insurance companies to go into some detail regarding the algorithms used to decide whether (and on which conditions) an insurance contract should be offered (a minimal sketch of a statistical check follows below).
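One simple building block of such evidence could be a statistical check of outcomes across protected groups, as in the following Python sketch. The data and the 80% ratio threshold (a common rule of thumb in fairness testing, not a legal test under the AGG) are illustrative assumptions:

```python
# Illustrative sketch: comparing acceptance rates across protected
# groups to flag potential disparities for closer review.
from collections import defaultdict

decisions = [  # (protected group, contract offered?) - toy data
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

offers = defaultdict(list)
for group, offered in decisions:
    offers[group].append(offered)

# Acceptance rate per group.
rates = {group: sum(values) / len(values) for group, values in offers.items()}
print(rates)

# Flag a disparity if a group's rate falls below 80% of the best rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"review needed: {group} rate {rate:.0%} vs. best {best:.0%}")
```

Such a check does not by itself prove or disprove discrimination, but it helps document that outcomes are monitored, which is exactly the kind of detail the reversal of the burden of proof under sec. 22 AGG may demand.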

These few examples alone make it apparent that many of the "elements of trust" mentioned in the EBA report are in fact, in one form or another, already mandatory legal requirements. The explainability, traceability and security of a BD&AA system could already be checked by BaFin as components of a "proper business organisation" or as measures "for the control of operational and security risks" (sec. 53 para. 1 ZAG). The GDPR offers several angles from which to ensure BD&AA systems' compliance with data protection requirements. Under certain circumstances, such systems can also be subject to judicial review on the basis of the AGG with regard to possible discrimination against customers. And even the ethical considerations listed as an "element of trust", mainly concerning the social and environmental risks of artificial intelligence, will partially become enforceable regulation in the foreseeable future: as soon as the measures described in the BaFin guidance note of 20 December 2019 addressing sustainability risks (see our blog post on this) become mandatory for supervised institutions, they will also have to be implemented in any BD&AA systems used (e.g. in securities evaluation).

Outlook

Supervision of the use of BD&AA technologies in the financial sector has so far been rather lax, even though sufficient grounds for stricter legal oversight are already in place.

The complexity of the issue is likely one key factor for this. In order to supervise the use of BD&AA technologies, supervisors first have to fully understand them. A deep understanding of the technology to be evaluated in each case, however, would require enormous resources on the part of the supervisory authorities. On the other hand, it would pose an obvious risk if supervisory authorities were to rely solely on the information the supervised companies themselves provide about the functionalities of their systems. The near future will show whether the EBA and the national supervisory authorities can find a viable middle way.

Another key factor to be considered is economic expediency. An overly strict handling of BD&AA technologies by a single national supervisory authority would currently be a clear disadvantage, especially for FinTechs in the respective state. Some of the most promising European FinTechs offer or work on BD&AA solutions, and they are often still small and flexible enough to try to escape financial regulation simply by relocating their headquarters. The need for European regulation of the matter is obvious. Given the provisions already governing the use of BD&AA technologies, such European regulation will likely not come in the form of an actual regulation or directive. Instead, EBA guidelines standardising the national supervisory authorities' handling of the existing BD&AA regulatory framework and ensuring a level playing field in the European FinTech industry are likely to be introduced in the medium term. The timing of such EBA guidelines cannot be predicted with any certainty; a single case plainly highlighting the risks of the new technology may, as is often the case, expedite the regulatory process.

Users of BD&AA technologies in the financial sector are therefore well advised to consider the regulatory requirements addressed to them early on.
