Meaning is everything: the problem of defining ethics for AI algorithms

Developing AI algorithms without strict definitions could create ethical problems for financial firms. To avoid mishandling their algorithms and potentially harming certain customer groups, firms must ensure their AI tools are no broader than the definitions they are based on.

Iga Pilewska ([email protected])

An overlap in understanding

Settling on agreed definitions of the many terms that exist in the multiple domains of financial services is notoriously difficult. As globalization continues to create more interconnections within the sector, areas such as fixed income or FinTech are spreading across regions with different understandings of concepts, systems and processes*. Often the best that many institutions can hope for is an overlapping understanding of different terms, rather than a precise, distinct meaning for each. As highlighted by several examples**, we seem to be seeing a shift from rigid definitions to more open-ended ones that indicate a general understanding of a concept and include qualifiers such as ‘typically’, ‘can’ or ‘in contrast to’.

In parallel, the use of machine learning (ML) and other artificial intelligence (AI) algorithms is on the rise. In 2019, for example, more than 70% of financial institutions (FIs) reported using ML for credit scoring and decisioning, compared with about 50% in 2018. AI, like any statistical process, needs mathematical translations not only of the definitions of terms but also of concepts – such as equal treatment for all people regardless of age or national origin. The less rigidly a term or concept is defined, the more likely it is that unintended meanings will slip through.

Defining the norm

This creates a complication for FIs. To develop AI algorithms, they will need an ever-increasing number of definitions. But how can they ensure they embrace a wider understanding of terms and concepts when developers or data scientists may have only a couple of examples to work with? Another challenge comes from the rapidly changing nature of financial services markets: new products continue to emerge, but there is no shared taxonomy that is updated as each change occurs.

This problem is particularly hazardous when it comes to ethics. Ideally, decision makers in financial services should follow some sort of ethical code: most people would agree, for example, that customers’ credit limits should not differ by gender.

According to Professor Aaron Roth of the University of Pennsylvania, speaking at a September 2019 ‘AI in Finance Summit’ organized by RE•WORK (re-work.co/events), there are various steps in the decision-making process in which we expect people to obey certain ethical norms. But someone developing or deploying an algorithm may be several steps away from the decision-making process. A firm’s CTO may decide to introduce a new ML algorithm, for example, but the person choosing and organizing the data for it may be part of a separate engineering team. To ensure that norms are obeyed, decision makers must define them and translate them into mathematics by encoding ethical principles directly into the algorithm’s design***. As Professor Roth pointed out, we should be mathematically precise about our definitions and, once we fix a definition, we need to explore trade-offs, such as ethical acceptability versus efficacy.
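
To make this concrete, the sketch below (a hypothetical illustration in Python, not Professor Roth’s method or any firm’s actual practice) fixes demographic parity in approval rates as the definition of the norm, encodes it in the decision rule through group-specific score thresholds, and then measures what that choice costs in raw accuracy. All data, variable names and thresholds are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # protected attribute (illustrative stand-in)
income = rng.normal(50 + 5 * group, 10, n)     # historical data encodes a group gap
repaid = (income + rng.normal(0, 8, n) > 55).astype(int)
X = income.reshape(-1, 1)

model = LogisticRegression().fit(X, repaid)
scores = model.predict_proba(X)[:, 1]

def approval_gap(decisions):
    # The fixed definition of the norm: the difference in approval rates between groups.
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Efficacy-only rule: a single global score threshold.
naive = (scores > 0.5).astype(int)

# Norm encoded in the decision rule: per-group thresholds chosen so that both
# groups are approved at the same overall rate.
target_rate = naive.mean()
fair = np.zeros(n, dtype=int)
for g in (0, 1):
    cutoff = np.quantile(scores[group == g], 1 - target_rate)
    fair[group == g] = (scores[group == g] >= cutoff).astype(int)

print("approval-rate gap:", approval_gap(naive), "->", approval_gap(fair))
print("accuracy         :", (naive == repaid).mean(), "->", (fair == repaid).mean())

Running a comparison of this kind does not settle whether the trade-off is acceptable, but it makes the choice explicit and quantifiable, which is exactly what a fixed, mathematically precise definition allows.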

For example, if an FI requires an algorithm to be gender-neutral, it must ensure that gender, and factors that may act as proxies for gender (such as being a regular reader of Cosmopolitan or a user of aftershave), are not used as input variables for the ML algorithm. If such a variable has historically been a significant, positively correlated predictor, however, prohibiting it may compromise the algorithm’s ‘predictive effectiveness’.
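
One way to operationalize that requirement is sketched below: each candidate feature is screened for how well it predicts the protected attribute on its own, and likely proxies are flagged before training. This is an assumed approach for illustration only; the feature names and the 0.65 AUC cut-off are inventions, not any FI’s actual practice.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def find_proxies(X, feature_names, protected, auc_threshold=0.65):
    # Flag features that, taken alone, predict the protected attribute well.
    proxies = []
    for j, name in enumerate(feature_names):
        auc = cross_val_score(LogisticRegression(), X[:, [j]], protected,
                              cv=5, scoring="roc_auc").mean()
        if auc > auc_threshold:
            proxies.append((name, round(auc, 2)))
    return proxies

# Synthetic example: 'magazine_subscription' is constructed to track gender closely.
rng = np.random.default_rng(1)
n = 4_000
gender = rng.integers(0, 2, n)
magazine_subscription = (gender + rng.normal(0, 0.4, n) > 0.5).astype(float)
income = rng.normal(50, 10, n)
X = np.column_stack([magazine_subscription, income])

print(find_proxies(X, ["magazine_subscription", "income"], gender))
# Expected: the subscription feature is flagged as a proxy; income is not.

Any flagged feature can then be excluded and the model refitted, so that the resulting loss of predictive effectiveness is measured up front rather than discovered after deployment.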

Better an imperfect definition than none at all

In an era of globalization, companies must address cultural and linguistic differences. In such a context, settling on a widely accepted definition of ethical norms is almost impossible. Employees in different regions may have different understandings of certain norms (such as ‘fairness’), and local regulators may expect FIs to respect different principles and ethical standards.

For FIs, one solution is to narrow the context as much as possible. Rather than trying to find a universal definition of ‘fairness’, for example, firms could establish what ‘fairness’ means for credit-score assessments for customers in certain regions. Once a definition is in place, developers can enhance it whenever they realize it is missing an important element, carefully weighing the trade-offs at each step. While this may require some ‘logical gymnastics’ at a governance level, it should help firms to develop more ethically sound ML algorithms.
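
As a sketch of what such a narrowed definition might look like in practice (the region names and thresholds below are entirely hypothetical), the working definition can be written down as an explicit, testable check that developers extend whenever they find it is missing an important element.

import numpy as np

# Regional working definitions, tightened or extended over time as gaps are found.
REGIONAL_LIMITS = {
    "region_a": {"max_approval_gap": 0.05},
    "region_b": {"max_approval_gap": 0.03},
}

def check_credit_fairness(decisions, protected, region):
    # Returns (passes, measured_gap) under the region's current definition of fairness.
    limit = REGIONAL_LIMITS[region]["max_approval_gap"]
    groups = np.unique(protected)
    rates = [decisions[protected == g].mean() for g in groups]
    gap = max(rates) - min(rates)
    return gap <= limit, gap

# Usage: run against a candidate model's decisions before deployment in a region.
rng = np.random.default_rng(2)
decisions = rng.integers(0, 2, 1_000)
protected = rng.integers(0, 2, 1_000)
print(check_credit_fairness(decisions, protected, "region_a"))

Each element added later (a second protected attribute, a tighter threshold, an additional metric) then becomes a visible, reviewable change to the definition rather than an implicit assumption buried in the model.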

Fit for purpose

AI algorithms should be fit for purpose not just within the context of the business, but also in terms of local variations of ethical norms. The governance machinations involved could raise important questions – such as how an entity might balance profit and ethical compliance, or whether an international entity has a global code of ethics that takes precedence over its regional businesses. And if it doesn’t, is there a risk that unscrupulous regional entities might allow more unethical choices within their algorithms under the guise of ‘regional requirements’?

In an ideal world, FIs should prioritize ethical compliance over the pursuit of profit. This more far-sighted view can help to nurture positive public opinion, and could actually generate bigger profits longer-term. To ensure that regional entities do not make more unethical choices, we not only need a clear understanding of what ethical choices actually are, but also an understanding of the trade-offs between different choices.

FIs should either have a dedicated ‘data ethics’ team or partner with an entity that seeks to establish an ethical code. They should also fully understand customers’ expectations and concerns around the ethical behavior of AI algorithms. To protect integrity while allowing for diversity, FIs should have core high-level values that form a code of ethics that takes precedence over regional businesses, but which nonetheless allows for regional variations that may develop in response to customers’ expectations.

If FIs want to avoid losing control over their algorithms and discovering that the results of their analysis could harm certain customer groups (albeit unintentionally), AI applications cannot be broader than the definitions they are based on. So rather than settling on what is often the easiest option – an overlap of meaning – firms need to return to formulating rigid definitions. This will encourage FIs not only to seriously consider the trade-offs inherent in developing algorithms, but also to regularly readjust their thinking and improve the definitions they have. The sooner they start on this journey, the easier it will be to learn from their mistakes.

* In China, for example, where payments are more closely linked to mobile phones, the definition of ‘FinTech’ products in the context of the mobile industry can include hardware (‘security chips that support mobile payments operations’). In the US and Europe the definition tends to include only software.

**Examples of different definitions of 'fixed income': BlackRock, Barclays, Fidelity. Examples of different definitions of 'FinTech': FinTech Weekly, Central Bank of Ireland.

***Kearns, M. and Roth, A. (2019). ‘The Ethical Algorithm: The Science of Socially Aware Algorithm Design’, Oxford University Press, pp. 3-4.

 

Points of View are short articles in which members of the Chartis team express their opinions on relevant topics in the risk technology marketplace. Chartis is a trading name of Infopro Digital Services Limited, whose branded publications consist of the opinions of its research analysts and should not be construed as advice.

If you have any comments or queries on Chartis Points of View, you can email the individual author, or email Chartis at [email protected].