Ethical Considerations Around the Use of AI by Businesses, Part I: Bias

[Illustration: the scales of justice, with two humanoid faces in profile, one white and one blue, behind them.]

The first commercial use of AI came in 1980, when a “rules-based” system automated the configuration of computers based on customers’ specifications. That was a case of intelligent automation—“intelligent” because the system did not merely follow rules like a calculator. Fast-forward to the 2020s, when the phone in your pocket can beat any human chess player. Businesses now use AI for tasks ranging from the automated generation of recommendations for e-shoppers to intelligent assistance in the hiring process—and these are just two of perhaps a thousand ways in which AI can be (and is being) used.

Consider now that there are articles on news sites, posts by independent bloggers, and interviews on YouTube with titles such as:

The Biggest Challenge Facing AI Is an Ethical One

The Ethical Considerations of AI in E-Commerce

How Can Companies Use AI Ethically?

The natural question is why ethical considerations arise in this context at all: For instance, if a business were to use AI for internal budget development, one wouldn’t expect ethics to enter the picture.

One answer—but certainly not the only one—to why ethics becomes an issue in the deployment of AI systems by businesses is that the output of AI algorithms, and therefore the recommendations they make, can be biased. In essence, the intelligence of an AI-based system is limited by the data on which it has been trained—but that limitation does not merely equate to a low level of competence. It extends to recommendations that are unjustifiable or even ludicrous.

Bias From the Training Data

How exactly can an AI system inherit bias from its training data? For one, bias can arise if the data comes from sources that do not sufficiently represent the populations among which the system will be used. In particular, the training data might include disproportionately more samples from some groups of people and fewer from others—groups defined by ethnicity or gender, perhaps.
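
As a loose illustration of that mechanism, here is a minimal sketch (the data is synthetic and the two “groups” are invented) of how a model trained almost entirely on one group can end up far less accurate on a group it has barely seen:

```python
# Hypothetical sketch only: synthetic data, invented "group A" / "group B" labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_feature):
    # Each group's outcome depends on a different feature, so a model that has
    # mostly seen one group will look in the wrong place for the other.
    X = rng.normal(size=(n, 2))
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, informative_feature=0)
Xb, yb = make_group(100, informative_feature=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
for name, feature in [("group A", 0), ("group B", 1)]:
    X_test, y_test = make_group(1000, feature)
    print(f"{name} error rate: {1 - model.score(X_test, y_test):.1%}")
```

The model is not malicious; it simply never saw enough of group B to learn what matters for that group.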

The hiring process is a topic exquisitely suited to the ideas of ethics and bias. Say a company uses an AI-based hiring system (AIHS) that performs facial analysis on candidates. Would the use of such a system be ethical even if it were accurate?

Studies by an MIT researcher and a Stanford graduate student revealed that certain AI systems that could be used for facial analysis, among other uses, had an error rate that was on average 40 times higher for dark-skinned women than for light-skinned men. The authors concluded that this had come about because of skewed training data.

The bias here—in terms of gender as well as race—was not intended, and neither was it expected. But that is the point: Bias on the part of people is obviously unethical, but bias that is inadvertently generated is not obviously unethical. No court of law can say that the people who designed the algorithm should have foreseen that the AIHS would develop such biases.

Bias From… We Don’t Know Where

An AI-based hiring system used by a company is one thing; a system used by governments is another. Facial analysis is one thing; facial recognition should be… simpler? Less prone to bias, perhaps?

A facial recognition system, used by police departments in several countries—and by US Customs and Border Protection—was shown, in 2017, to exhibit higher error rates for people of certain races and for women. At its worst—that is, when analysing the faces of women of one race—the system’s accuracy was lower than its best-case performance (on men of a different race) by a factor of 10. In 2023, the NIST ranked it the most accurate among such systems, but it is not clear whether its performance gap across race and gender has narrowed.

The ethical issue is clear: Many technological systems have an acceptable error rate—but in cases like the security-and-surveillance system described above, the shortcomings raise ethical issues as much as issues of accuracy. Ethics comes into even starker focus here than in the case of an AIHS; it is citizens rather than job applicants who might be subjected to unfair treatment—and at the hands of a government rather than a company.

Bias From… We Can’t Tell You

You’re at an interview. You seldom smile. Human interviewers might assume you are a serious person—or perhaps something else. But what if the only time you smiled was when the interviewers mentioned the (large) salary package, and the AIHS picked up on that, interpreting your smile as “the interviewee is in this only for the money”?

If that sounds far-fetched, a 2019 case centring on “unfair and deceptive practices” by a recruiting company raises numerous questions about AI vis-à-vis bias and the hiring process. The company still sells video interviewing software, possibly stripped of some of the capabilities advertised earlier:

  • Its ability to gauge—from videos of job interviews—aspects ranging from the cognitive abilities to the emotional intelligence of the interviewee
  • Its ability to analyse applicants’ facial expressions and intonation to arrive at its recommendations

An important consideration here is that the algorithms the company used were not made available to users of the system; the system also relied on algorithms about which, apparently, even the company itself knew little.

In our earlier example of facial analysis systems developing bias because of skewed training data, at least someone knew what the training data was. In the case of this interview-video analysis software, it would probably be impossible to trace the origins of any biases—assuming they did exist.

Even if we were to believe the system had no biases, would it be ethical to use it, knowing that an AIHS can exhibit bias? Also consider: If a system’s statistical models are based on a person’s behaviour during an interview, would it be ethical to say, “We cannot hire you because our special computer here predicts that your future behaviour in the workplace would not be good for our company”? That would not just be unethical; it would be nonsensical.

However, going in the opposite direction—not telling the person that they were disqualified by a system whose workings even its operators did not understand—would not be nonsensical, but it would arguably be even more unethical.

The controversy around AI analysis of interviewees can be appreciated from the fact that in 2020, a US state introduced an Act imposing rules and limitations on the use of AIHS, including the requirement that interviewees be told how the AI works and what data it tracks during their interviews.

We might seem to have been presumptuous when we said “…assuming they did exist.” But considering the ways in which bias can come about, the assumption is not unreasonable:

  • In the case of system-generated bias, an AI system learns patterns it is not explicitly programmed to learn. For instance, in a city with significant affluence disparities among neighbourhoods, say a company prefers résumés from applicants who live within a certain radius of its office so that employees’ commute times will be shorter. If the office is located in a large, affluent neighbourhood, this condition might exclude applicants from less-affluent neighbourhoods; a résumé-screening AIHS might learn from this condition that applicants from those neighbourhoods were less likely to perform well at the company (the sketch below illustrates how such a proxy can creep in).

As a matter of fact, AI systems do learn rules on their own.
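
Here is a minimal sketch of that neighbourhood scenario. The applicants, features, and past shortlisting decisions are all fabricated; the point is only that if historical decisions were partly driven by where applicants lived, a model trained on those decisions will pick the proxy up:

```python
# Hypothetical sketch only: fabricated applicants, features, and past decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000

skill = rng.normal(size=n)                # genuinely relevant to the job
near_office = rng.integers(0, 2, size=n)  # 1 = lives near the (affluent) office

# Historical shortlisting favoured nearby applicants regardless of skill,
# so the labels the model learns from are partly driven by the proxy.
shortlisted = (0.8 * skill + 1.5 * near_office + rng.normal(0, 1, n) > 1).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, near_office]), shortlisted)

print("learned weight on skill:      ", round(model.coef_[0][0], 2))
print("learned weight on near_office:", round(model.coef_[0][1], 2))
# The near_office weight comes out large and positive: the model reproduces the
# old preference, so applicants from other neighbourhoods start at a
# disadvantage however skilled they are.
```

Nothing in the data mentions affluence or ethnicity; the bias rides in on a seemingly innocuous feature.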

Unintended Lessons from Past and Present Patterns

Continuing the example of AI in the hiring process, let’s think of something simpler than facial analysis that an AIHS might do. It could take care of mundane tasks such as parsing résumés and rejecting those that clearly indicate a poor fit, gauging the performance of candidate-sourcing companies, and so forth.

It turns out even these aren’t mundane. 

The world’s largest online retailer discovered in 2018 that the AIHS it had developed had learnt to discriminate against women (a toy sketch of how such a pattern can emerge follows the list):

  • Seeing that more men were employed with the company, it “learnt” that being male was a desirable trait on an applicant’s résumé.
  • It detected a pattern of more men having been hired over the past few years for certain roles—and decided to penalise the résumés of women applying for such roles.
  • The AIHS went so far as to penalise résumés that included the word “women’s”—for instance, résumés with the phrase “women’s baseball.”
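
Here is that toy sketch. It is not the retailer’s system; the résumés and historical hire/no-hire labels below are fabricated purely to show how a text model trained on past hiring decisions can end up attaching a penalty to a word such as “women’s”:

```python
# Toy illustration only: fabricated résumés and historical hire/no-hire labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain men's chess club, software engineer",    # historically hired
    "men's rowing team, data analyst",                 # historically hired
    "software engineer, hackathon winner",             # historically hired
    "captain women's chess club, software engineer",   # historically rejected
    "women's rowing team, data analyst",                # historically rejected
    "volunteer coordinator, retail experience",         # historically rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()            # bag-of-words features
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("learned weight for 'women':", round(weights["women"], 2))  # negative
print("learned weight for 'men':  ", round(weights["men"], 2))    # positive
```

No one tells the model that gender matters; it simply latches onto the token that best separates past “hired” from past “rejected” résumés.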

The company detected the system’s pro-male bias and, therefore, never deployed the system. Even so, we must consider that a bias that favours men is relatively easy to detect. We cannot entirely preclude the possibility of an AIHS developing, for instance, a ludicrous belief such as “the ideal résumé would be that of an ex-lacrosse player named Jared.”

We didn’t make that up: In 2018, a résumé-screening company found that a certain AI system had somehow picked up on exactly that idea. Was this a freak mistake? Or perhaps the result of jocular intent on the part of a lacrosse-playing programmer called Jared? Neither: the system had genuinely learnt that pattern from its data. But coming to the topic of intent…

Unethical Outcomes With—and Without—Malicious Intent

In a widely publicised case of intentional abuse of AI, a large auto manufacturer used AI-enabled software in some of its models to unfairly pass emissions tests. In a couple of its car models, emissions would be high while the car was delivering high fuel efficiency, and vice versa. The software could detect when an emissions test was underway; at that time, the car’s emissions as well as fuel efficiency would become lower.

Can’t we just dismiss that as unethical behaviour by a large company?

Maybe—or maybe not; an AI system built to promote certain outcomes could also help a company achieve those outcomes, inadvertently, in an unethical manner. There is a fine line.

AI has been widely adopted in financial technology: Financial institutions can benefit from AI in a long list of ways—from gauging creditworthiness to determining optimal pricing for financial services. What could go wrong when an AI system decides it would be a good idea to approve a loan for someone with a relatively low credit score, at a higher-than-average interest rate?

People know such transactions as predatory lending; an AI loan-approval system designed merely to maximise profits and minimise risk would “imagine” them as… well, profitable and low-risk. And yes, the bank would benefit while financially non-savvy customers fell deeper into debt than they intended—without the bank even knowing it was acting unethically. Or would bank officials know about the ethics of the matter?
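
To make the point concrete, here is a deliberately simplified, invented approval rule that optimises only for the lender’s expected profit. Nothing about the borrower’s ability to sustain the repayments enters the calculation, so lower credit scores simply translate into higher offered rates:

```python
# Simplified, invented model of a profit-only approval rule. The default
# probabilities and rate grid below are made up for illustration.
def expected_profit(principal, rate, default_prob):
    # Interest earned if repaid, minus the expected loss if the borrower defaults.
    return (1 - default_prob) * principal * rate - default_prob * principal

def offer(principal, credit_score):
    # Hypothetical mapping: the lower the score, the higher the assumed default risk.
    default_prob = max(0.02, (700 - credit_score) / 1000)
    for rate in (r / 100 for r in range(5, 41)):  # try 5% .. 40%
        if expected_profit(principal, rate, default_prob) > 0:
            return rate  # first rate at which the loan looks profitable
    return None  # decline only if no rate makes the loan profitable

for score in (780, 680, 580, 480):
    rate = offer(10_000, score)
    print(f"credit score {score}:", f"offered at {rate:.0%}" if rate else "declined")
```

The rule never declines anyone it can profit from; it just charges them more, which is exactly the pattern described above as predatory.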

Finally, consider what seems to be—and might well be—a case for the use of AI systems in the loan-approval process. In the US, human lenders in face-to-face meetings with loan applicants charge people of certain ethnicities more, on average, for purchase mortgages; AI lending algorithms do the same, but the premium they charge is about 40 percent lower. That is, they discriminate less on the basis of race.

But when they are used, no specific human can be held accountable for the higher interest—because the system only looked at numbers, not faces! Talking about faces…

Pre-Interview Social Media Profiling

We mentioned the large online retailer whose AIHS inherited a certain bias from past outcomes. In analysing applicants’ social media activities, some employers exercise deliberate bias—sometimes for good reasons, sometimes not.

A 2020 research study of about 40 recruiters revealed that they gave weight to information on the Facebook pages of a large set of potential employees—including relationship status and religion—that was irrelevant to the positions being applied for. That in itself would qualify as unethical—but many months later, two different sets of recruiters examined the Facebook pages of 80 people from the earlier set who had since been hired. The researcher’s conclusion: Considering, versus carefully ignoring, irrelevant information such as religion had no impact on the accuracy of the recruiters’ predictions about turnover or performance.

In the context of that research, consider that an AIHS that scoured the social media content of would-be employees might:

  • inherit the conscious biases embedded in the data it was trained on (“Downvote left-leaning/right-leaning candidates”)
  • learn many more biases because of the complexity and sheer volume of data
  • take certain information out of context
  • link and connect pieces of information in unexpected, potentially damaging ways
  • be trained to ignore certain aspects—such as political leanings—but not factors it had discovered on its own by linking unconnected pieces of information
  • still end up not correctly predicting turnover potential and/or performance!

These possibilities are plausible guesses based on the examples of unexpected outcomes we’ve mentioned.

Further Considerations

The subject of ethics in the context of the use of AI systems by businesses is complex. In this post, we’ve primarily looked at how such systems can affect individuals as a result of human or system-generated bias. There are also considerations around how their use affects society at large, technical or algorithmic considerations, and the all-important consideration of accountability. We’ll look at these aspects in future posts.
