
When AI decides for you: discrimination, black boxes and ethics in the age of algorithms

Machine learning algorithms require large amounts of data, but without proper control they can adopt flawed patterns from the past. There are more and more cases illustrating this.

In Amazon's case, artificial intelligence has proved a powerful tool in the fight against counterfeiting: automated systems make it possible to quickly detect and block counterfeits, which has reduced the number of complaints from brands by 35% in five years.

However, not all of the company's AI initiatives have been successful. For example, a system that automated the recruiting process turned out to be biased against women and began rejecting their resumes. A similar situation arose at Apple. And the chatbot of the Canadian airline Air Canada outright misled customers by giving them inaccurate information.

So sometimes the use of AI brings a business not benefits but problems. It is instructive to see how companies change their attitude toward innovation: it gives a better sense of what to pay attention to when implementing such technologies.

Apple, too, is promoting AI in its products: the Siri voice assistant has started using ChatGPT, the camera is learning to show the opening hours of businesses, and images can now be generated on the iPhone. But not all of these innovations have proved safe. Back in 2019, the company was criticized for the algorithms behind Apple Card, a virtual credit card issued by Apple in cooperation with the bank Goldman Sachs. At the time, the outcry was caused by bias in the system's decision-making.

AI promises to revolutionize business, but without proper controls it reproduces and reinforces human biases

Let's consider how this happens, using the high-profile cases of Amazon, Apple and COMPAS as examples.

IA "FACT" has already written that machine learning algorithms learn patterns from historical data. If those data contain prejudices, the AI inherits them. This can lead to discriminatory decisions even if the developers never intended it.

A well-known case: in 2014, Amazon developed a system for automatically screening candidates for technical positions. The algorithm was trained on resumes submitted over the previous 10 years, most of which came from men. As a result, the system began downgrading resumes that mentioned the word "women's", for example "captain of the women's team". The project was closed, but the case became a lesson in how data can build bias into AI.
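The mechanism is easy to reproduce on toy data. Below is a minimal sketch (not Amazon's system) of how skewed historical labels leak into a model: a tiny text classifier trained on invented hiring decisions learns a negative weight for a gendered word.

```python
# A toy sketch (not Amazon's system): a classifier trained on biased historical
# hiring labels learns to penalize a gendered word. All data are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess team, python developer",
    "python developer, open source contributor",
    "captain of the women's chess team, python developer",
    "women's coding club mentor, python developer",
]
hired = [1, 1, 0, 0]  # biased past decisions: similar skills, different outcomes

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative:
# the bias from the labels is now inside the model.
weight = model.coef_[0][vec.vocabulary_["women"]]
print(f"weight for 'women': {weight:.2f}")
```

The model never received an instruction to penalize anyone; it simply learned the pattern that was already present in the labels.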

In 2019, Apple Card users reported that their spouses received significantly lower credit limits, despite joint finances and similar credit histories. This sparked outrage and an investigation by regulators. Although the official investigation found no direct discrimination, the case highlighted how opaque algorithms can lead to unfair results.

The COMPAS system is used in the United States to assess the risk of criminal recidivism. Research by ProPublica revealed that black defendants were more likely to receive high risk scores even if they did not reoffend, while similarly situated white defendants received lower scores. This sparked a serious debate about the fairness and transparency of the use of AI in the judicial system.


In 2015, Google launched a new automatic face recognition feature in its Google Photos service. The system was supposed to determine automatically who is depicted in a photo and suggest tags. But within a few weeks, one user posted a screenshot on Twitter in which his black friends were mistakenly tagged as… "gorillas".

The reaction was immediate: a wave of outrage on social networks and accusations of racism. Google quickly removed the "gorillas" tag altogether: it did not fix the model, it simply removed the label. Even now, 10 years later, Google Photos does not recognize gorillas, for fear of making the same mistake again.

In 2016, Microsoft decided to show off its artificial intelligence. It created the Twitter bot Tay, a "teenager" that was supposed to learn online communication. But things did not go according to plan: just hours after launch, Tay started posting racist, sexist and Nazi messages.

All because the bot "absorbed" everything that users wrote to it. Some deliberately bombarded it with toxic content, and Tay quickly followed their example. Microsoft had to shut Tay down urgently and issue a public apology.

To avoid the pitfalls of biased AI, it is worth analyzing data for bias before training the algorithms, making sure the training set does not contain discriminatory patterns. It is important to make algorithms transparent by using models that can be interpreted and explained. To identify and eliminate potential biases, it makes sense to involve diverse teams and to regularly review results by auditing AI systems, especially in critical areas such as finance, healthcare and justice.
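A simple first check of the data can already reveal a skew. Here is a minimal sketch of such a pre-training check, assuming historical outcomes sit in a pandas DataFrame; the column names, the toy data and the 0.8 threshold (the common "four-fifths" rule of thumb) are illustrative assumptions.

```python
# A minimal pre-training bias check on historical outcomes, assuming a pandas
# DataFrame with a protected attribute ("gender") and a past decision ("hired").
# Column names, data and the 0.8 threshold are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "m", "f", "m", "m"],
    "hired":  [0,   1,   0,   1,   1,   1,   1,   0],
})

rates = df.groupby("gender")["hired"].mean()   # selection rate per group
disparate_impact = rates.min() / rates.max()   # "four-fifths" rule of thumb
print(rates)
print(f"disparate impact ratio: {disparate_impact:.2f}")

if disparate_impact < 0.8:
    print("Warning: historical outcomes look skewed; review the data before training.")
```

A ratio far from 1 does not by itself prove discrimination, but it is a signal that the historical outcomes deserve a closer look before any model is trained on them.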

The “black box” effect: why even developers don’t always understand how AI works

Imagine the situation: you have been refused a loan. The reason? That is what the automated system decided. You ask why, and no one can give you a clear answer, because the decision was made by artificial intelligence that operates with millions of parameters but does not explain the logic of its actions. This is called the "black box" effect.

Artificial intelligence, especially AI based on deep learning, is a complex network of mathematical functions that analyzes millions of examples and makes a prediction or decision. The problem is that even the developers themselves often cannot explain why the system chose one option over another. It "sees" patterns that remain invisible to us.

By way of comparison: it is like asking an experienced doctor why they believe a patient is seriously ill and hearing the answer: "I just feel it."

When we don’t understand logic, we can’t test it. And we cannot see whether there are errors, racial or gender biases built into the model. This is not only a technical problem, but a deeply ethical one.

Thus, in the aforementioned COMPAS system, which helped judges determine whether a person should be released on bail, the algorithm regularly overestimated the "risk of recidivism" for black defendants and underestimated it for white ones.

And in another story, the research company Anthropic tried to figure out what happens inside a large language model (such as GPT). It found that some "neurons" in the model can be responsible for abstract concepts like "pessimism" or "lyricism". That is, AI forms an inner world that we cannot see.


Scientists are already working on so-called explainable AI (Explainable AI). These are systems that not only give an answer but also explain it in simple terms. One approach is SHAP: it estimates how much each feature "weighed" in the model's decision, for example, how much your income, place of residence or education influenced the rejection of a loan. Another approach, LIME, builds a simple local copy of a complex model that can already be understood.
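To make the idea concrete, here is a minimal sketch of how SHAP is typically applied, assuming the open-source shap package and an invented credit-scoring model; the feature names, the synthetic data and the model itself are illustrative, not a real bank's system.

```python
# A minimal SHAP sketch on an invented credit-scoring model; feature names,
# data and the model are illustrative, and the shap package is assumed installed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["income", "years_at_job", "existing_debt"]
X = rng.normal(size=(500, 3))
y = X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=500)  # toy "credit score"

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]  # per-feature push for one applicant
print({f: round(float(c), 3) for f, c in zip(features, contributions)})
```

The printed values show how much each feature pushed this particular prediction up or down, which is exactly the kind of answer the loan applicant in the example above never received.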

These tools help put people back in control. Because when AI decides and no one knows how, it is no longer technology but the blind dictate of a machine.

When AI goes wrong, who is responsible?

Two years ago, a customer asked Air Canada's chatbot whether there was a special discount program for funeral travel. The bot replied: yes, we provide compensation. The airline then refused the compensation, because such a program was no longer active. The Canadian sued and won. The court ruled that the airline must be responsible for what its AI says.

What does this show? AI is not some abstract "neural network in the cloud". If it goes wrong, the company that launched it must answer for it.

Back in 2019, the European Commission formulated the principles of ethical AI. Among them: transparency, privacy protection, fairness and human control.

Some companies have their own codes. Capgemini, for example, requires all AI projects to undergo an "ethical check" to make sure they do not harm people or discriminate.

And Google, after the scandal with its own ethics researchers (who were fired for criticizing the company's AI work), stated that it had created a special council on AI issues. Although there are questions about its effectiveness.

This is why the concept of an "algorithm audit" has appeared. The idea is simple: an independent external team checks whether the AI works correctly, does not discriminate and does not lie. Like an accounting audit, only for neural networks.
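One concrete check such an audit might run is comparing error rates across groups, the very disparity ProPublica found in COMPAS. A minimal sketch, with invented data, group labels and column names:

```python
# One audit check: compare false positive rates across groups for a risk model's
# predictions. Data, group labels and column names are invented for illustration.
import pandas as pd

audit = pd.DataFrame({
    "group":               ["a", "a", "a", "a", "b", "b", "b", "b"],
    "predicted_high_risk": [1,   1,   0,   0,   0,   1,   0,   0],
    "reoffended":          [0,   1,   0,   0,   0,   1,   1,   0],
})

# False positive rate: flagged as high risk among those who did not reoffend
no_reoffense = audit[audit["reoffended"] == 0]
fpr_by_group = no_reoffense.groupby("group")["predicted_high_risk"].mean()
print(fpr_by_group)
```

A large gap in false positive rates between groups does not explain why the model behaves that way, but it tells the auditor where to dig.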

OpenAI, for example, announced that before GPT-4's launch the system was audited by external researchers for bias, malicious advice and policy violations.

And the EU is already preparing a law that will oblige companies to undergo such checks for “high-risk” AI — in medicine, justice, banking, etc.

So if the AI has caused harm, the company using it is liable. Ethics is not philosophy but a set of real rules that must be followed in order not to end up in court. Auditing is not a formality but the only way to see what is hidden in the code and parameters of the models.

…These stories—from Apple Card and Amazon’s discriminatory decisions to Google Photos’ racist glitches and Microsoft’s devastating Tay blunders—sparked a broad public debate: Should algorithms decide a person’s fate? And aren’t the same prejudices that we have been trying to fight for decades embedded in them already at the start?

Because when a black box makes a decision that even its creator cannot explain, we are not dealing with progress, but with a new form of uncontrolled power. AI that discriminates, degrades, misinforms is not just a technological problem. This is an ethical crisis.

And although companies rush to put out fires (apologizing, rewriting code, creating ethics councils), this alone will not restore users' trust. Because in a world where decisions are made by an algorithm rather than a person, any of us can become a victim at any moment. And no innovation is worth losing justice as a basic value.
