Chaouki Boutharouite
AI Governance and Thought Leadership Manager at AXA GETD (2018-2022)

January 30, 2022

Regulating AI: mission impossible?

AI & Cyber Resilience

Read time: 7 minutes

With the acceleration of digitalization and the promise of Artificial Intelligence's (AI) rapid development, the question of how to regulate AI usage and adoption has gained considerable traction over the past few years. Jurisdictions around the globe have launched a range of initiatives, from white papers and guidance documents to frameworks and policies. Meanwhile, a growing number of people, from field experts and researchers to business leaders and politicians, have been debating the best course of action. A proper strategy for regulating AI, some argue, could help unlock enormous value while preventing the risks related to this technology. However, a sober look at the multiple uses of AI reveals the complexity of the topic and the numerous challenges ahead. Setting realistic expectations for the regulatory framework may be one of the keys to a sensible approach to regulation.

First off, no one, even among the most fervent AI supporters, denies the need for AI regulation. The reason is simple: Artificial Intelligence is already part of our everyday lives. Its uses range from driving cars to diagnosing medical conditions or approving financial decisions, and its potential seems almost limitless. Globalization means that a single flaw in a large-scale AI system could have detrimental consequences for millions of people. As algorithmic predictions become ever more central to how companies operate across industries, the need for trustworthy AI grows in importance. And along with it, legal questions arise, especially since AI systems are prone to bias. To date, AI usage is not governed by a dedicated legal framework, but instead by the sum of AI-related elements in existing laws on data privacy, competition, consumer protection, and liability. Liability is a particular challenge, since AI systems are sometimes required to make decisions on their own: it may be difficult to determine, for instance, who is responsible for an accident caused by a driverless car.

The challenge of defining regulation

In this context, what form should AI regulation take? Regulatory authorities typically have an array of solutions at their disposal, ranging from non-binding guidelines to stricter policies. Which approach is best suited to AI: agreeing on a set of key theoretical principles, or building a comprehensive regulatory framework? How binding should regulation be, and what should its scope and level of concreteness be? One might think the European Union's landmark 2018 General Data Protection Regulation (GDPR) paved the way for regulating digital technology and its potential flaws. However, the very nature of AI makes the matter more difficult.

Even though it is based on extremely sophisticated models and is difficult for non-experts to understand, AI is ultimately a tool. When it comes to imagining AI policies, are we talking about regulating the technology itself, its purpose, or its specific applications? Envisioning AI as a single technology instead of considering the variety of its possible applications may result in a one-size-fits-all approach, which may be one of the most dangerous pitfalls of all.

The challenge of combating obsolescence

Even though Artificial Intelligence has been researched since the middle of the 20th century (the term itself was coined in the summer of 1956 at the Dartmouth conference), it is a relatively young research and application field. The technology moves very fast, with new techniques being implemented in businesses and brought to scale almost daily. It is therefore becoming increasingly difficult for regulatory bodies to understand such rapid change, let alone decide on rules and enforce them. This ever-changing landscape, combined with the highly advanced state of the technology and its massive impact on customers worldwide, makes the topic uniquely complex. One can see how governance conceived as a set of technical rules might be hard to enforce, and how rapidly evolving tech could leave legislators struggling to keep track of the latest developments, or even open regulatory loopholes. Instead, these characteristics might call for a soft, policy-based approach with general guidelines focused on fostering responsible use of AI and sensible data governance.

The challenge of speaking the same language

Several stakeholders have started studying this matter and proposing regulation at the international (European Commission, OECD…) and national (Singapore, United Kingdom, USA, China…) levels. The lack of a global AI policy leader so far is a challenge in itself, as multiple initiatives emerge, resulting in operational divergence and standardization issues on a global scale.

Through initiatives such as the proposed Artificial Intelligence Act (published in April 2021), the European Union takes a risk-based approach: AI use cases are sorted into four categories according to the risk they pose to EU citizens' health, safety, or fundamental rights. These are applications with minimal or no risk; applications subject to specific transparency requirements; high-risk AI systems, which must be transparent, traceable, and guarantee human oversight; and applications posing unacceptable risks, which are prohibited. Although the proposal must still be reviewed by co-legislators, it could lead to heavy penalties (up to 6% of a company's worldwide annual turnover).
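To make the four-tier logic concrete, here is a minimal, purely illustrative sketch in Python. The tier labels and the example use cases are simplifications drawn from commonly cited readings of the proposal, not an official mapping:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified view of the four tiers in the proposed EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "transparency, traceability and human oversight required"
    LIMITED = "specific transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of example use cases to tiers, for illustration only;
# the real classification comes from the Act's annexes, not a lookup table.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

The point is structural: under such a scheme, obligations attach to the risk tier of the use case, not to the underlying technology.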

By comparison, the US and China seem more reluctant to impose legal obligations. The US approach might be best summed up by Eric Schmidt, Chairman of the US National Security Commission on AI and former CEO of Google, who said in 2021 that the future of AI will not be built by regulation but by investments. Both the White House Office of Management and Budget's Guidance for Regulation of AI Applications and the Principles on Artificial Intelligence adopted in 2020 by the NAIC (National Association of Insurance Commissioners) are sets of high-level guiding principles that avoid stringent obligations so as not to stifle innovation.

Singapore offers an interesting alternative. Introduced by the Monetary Authority of Singapore (MAS), the Veritas Initiative aims at enabling financial institutions to evaluate their AI-driven solutions against the principles of fairness, ethics, accountability, and transparency. Its philosophy is to provide companies with a clear set of guidelines and exert control while keeping potential penalties to a minimum.

Finding balance

These different approaches also illustrate the pursuit of different goals through regulation. Whereas some countries, such as China or the US, are mainly focused on accelerating innovation and competitiveness, others primarily see regulation as a means to protect consumers and human rights. This raises a question: is regulating AI ultimately a matter of technology or a matter of values? Finding a balance between mitigating risks through regulation, fostering innovation, and creating an ecosystem of trust will be key to shaping the development of the AI field and enabling both economic success and social progress.

AI has the potential to transform people's lives for the better by revolutionizing all economic sectors and creating significant efficiencies. But this potential will be unlocked only if tech players, industry experts, researchers and regulatory bodies agree on a common framework and a sensible roadmap. Yes, regulating AI is possible, and it is necessary. To be successful, regulation will need to follow certain principles: focus on ethics and consumer protection rather than a purely technical approach, operate in an agile way, avoid one-size-fits-all guidelines and, most importantly, remain human-centric.

And tomorrow?

Even though the topic of personal data is significantly different in nature, the relatively successful deployment of the GDPR and the spillover effect it has had beyond its legal territory suggest that a functioning AI regulatory framework could emerge in Europe in the next 3 to 5 years. It is important to remember that this field is still in its early days and that multiple questions remain regarding the best regulatory approach. In any case, finding a balance between the need to mitigate risks and the willingness to foster innovation will be key to the further development of the AI field and will need to be reflected in regulation. Important challenges must be overcome along the way, such as getting regulatory bodies around the world to collaborate on a common roadmap.

3 Questions to Sir David Hardoon

Chief Data & AI Officer, UnionBank Philippines

Q: What is your opinion on regulating AI?

A: Ultimately AI is a technology enabler, which is why I think it is very important to contextualize the term "AI regulation". The idea of regulating AI is a little like regulating chip processors or C++. Of course, there are some contexts where regulation is required. But what matters more, I think, is regulating the context in which AI is used. There might be a higher bar for policy in sectors such as warfare, economics, or autonomous vehicles. Opening the door to AI regulation leads to many questions: how do we proceed? Do we regulate algorithms? It is all very opaque, which is why I think we need to be very careful.

Q: What would you say have been the main milestones on this front lately?

A: I think it's actually when things went wrong. Obviously, no one wants AI systems to create negative outcomes, and this should be avoided whenever possible. But each failure we witnessed – such as when we first noticed a delta between the credit limits offered to men and women – brought improvements. These events tend to create a bit of a shock to the system of our existing presumptions. This can have a ripple effect on the Fed, the SEC, and ultimately generate a global consultation with the industry. For instance, you are not allowed to collect sensitive data such as gender, religion, or ethnicity, as a means of protecting sensitive groups. However, in many cases collecting this information is the only way to detect and prevent a disadvantage to those groups. I don't think this realization would have occurred unless we were faced with such real-life situations.
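To see the tension Hardoon describes in miniature, consider this purely illustrative sketch; the records and amounts below are invented for the example:

```python
# Invented records for illustration; a real audit would use production data.
applicants = [
    {"gender": "F", "credit_limit": 4500},
    {"gender": "F", "credit_limit": 5200},
    {"gender": "M", "credit_limit": 9800},
    {"gender": "M", "credit_limit": 8700},
]

def mean_limit(records, gender):
    """Average credit limit for one group."""
    limits = [r["credit_limit"] for r in records if r["gender"] == gender]
    return sum(limits) / len(limits)

# Without the "gender" field, this delta simply cannot be computed,
# which is why forbidding collection of the attribute can also
# forbid measuring the disadvantage.
delta = mean_limit(applicants, "M") - mean_limit(applicants, "F")
print(f"Average credit-limit gap (M - F): {delta:.0f}")
```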

Q: You have spent most of your career as an AI specialist in Asia, and more specifically in Singapore. What are the characteristics of this region’s approach?

A: Singapore is in a unique position that enables it to welcome companies from all regions to do business. It is able to act as a broker between different parties and has built up an ecosystem to support that position. The primary approach is one of anchoring key principles to help companies adopt innovation while aligning with existing regulatory requirements. The fairness, ethics, accountability, and transparency (FEAT) principles for the adoption of AI in the financial sector, issued by the Monetary Authority of Singapore in 2018, are an example of this approach.
