Why AI Can’t Always Get It Right. The Risk of AI Governance and Automated Decision-Making


The more informed you are, the better decisions you make. The more rational and level-headed you are, the fewer of those decisions you come to regret. It helps if you have access to reliable studies, can connect data points, and can learn from previous experience. On these grounds, there is no reason for an AI to be a poor decision-maker. Unless ChatGPT tattoos its boyfriend’s name next to its heart, I don’t think it’s going to prove it has worse judgment than humans.

AI can analyze a situation well, spot patterns, and give you the pros and cons of your decisions. It has a tough time making decisions on an individual basis, though. The technology isn’t built that way. As a result, AI doesn’t deal well with unique situations. There are reasonable concerns when it comes to AI Governance and automating the decisions we make for ourselves and for others.

When AI Governance Goes Awry. From Bias to Error

Let’s put it delicately and say you can’t judge a person by the color of their tie.

In a court of law, when hiring talent, in school admissions, when evaluating employees or dividing resources, it’s neither helpful nor ethical to look for patterns. AI can be biased based on the data it is trained on. If the data is biased, the AI will make decisions that reflect that bias.

Celina Bottino at The Institute For Technology & Society of Rio de Janeiro sums it up perfectly:

“AI could make the judicial system faster and expedite the decision-making process, but [without] final decisions and having the last say. We can’t and shouldn’t put any machines in place of humans.”

Whenever AI is in a decision-making position, it seems to make its share of unfair decisions that we humans would immediately identify as such. A doctor speaks about mammograms on YouTube and gets her account suspended for offensive language. A work of art gets censored for nudity (not now, Florida!). Some exciting resumes don’t make it through the filter because your AI learned that the best employees come from a very, very white school.

It’s not that a programmer decided to insert bias into the algorithm. You can’t combat racism by writing a line of code that says “if: racist, then: don’t be”.

AI isn’t built to work with individual situations. Pattern learning is at the beating heart of generative AI. Not judging on an individual basis is at the core of discrimination, in both human and machine learning. You can’t prevent AIs from occasionally making the kind of judgments a drunk grandpa makes at the Thanksgiving table. “Then how come so many of those nice students at Harvard are white?!”
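
To see how that happens without a single prejudiced line of code, here is a toy sketch. The data, the features, and the numbers are all invented for illustration; this is not anyone’s real hiring system. A model trained on historically skewed hiring decisions learns to reward the “right” school entirely on its own:

```python
# Toy illustration: a model trained on historically skewed hiring data learns
# to reward the "school" feature, even though nobody wrote a biased rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Features: a skill score (what we care about) and a school flag (1 = "prestigious")
skill = rng.normal(0, 1, n)
school = rng.integers(0, 2, n)

# Historical labels: past hiring favored the prestigious school regardless of skill
hired = (0.5 * skill + 2.0 * school + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, school]), hired)

# Two candidates with identical skill, different schools
candidates = np.array([[0.5, 1], [0.5, 0]])
probs = model.predict_proba(candidates)[:, 1]
print(f"P(hire | prestigious school) = {probs[0]:.2f}")
print(f"P(hire | other school)       = {probs[1]:.2f}")
# The gap comes entirely from the training data, not from any "racist" line of code.
```

Nobody told the model to prefer that school. It found the pattern on its own, which is exactly the problem.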

If you allow AI to make critical decisions, you risk building a dire society for us humans. On the bright side, it will do it really fast and really cheap.

The Uneasiness of Sensitive Decisions

There are ethical issues associated with decision-making. Which parents get to adopt what child? Who is guilty in a court of law? Which matching patient gets the available organ? Who gets the job, and who gets fired? There is a lot of uneasiness around letting algorithms decide on issues that require empathy and understanding of context.

I expect we will see a massive wave of regulations telling us that we can use AI as a decision-making tool, but that a human needs to do the final validation.

AI lacks context, common sense, and emotional intelligence. At work, we humans are held reasonably accountable for our decisions and actions. AI’s disclaimer is the same as the one you give your roommate along with relationship advice: “But what the hell do I know?”

You Can’t Solve Every Problem by Looking for Patterns

As humans, we have many complex ways of learning. Machine learning, generally speaking, means looking at patterns. This is good enough, if not great, for most repetitive tasks. It’s not when you deal with outliers.

If you look for patterns, you miss Gabor Maté, Serena Williams, Neil deGrasse Tyson, and Eminem. You would have missed Ted Bundy. You couldn’t have predicted the fall of the Berlin Wall by following the patterns the USSR had introduced into the system. You couldn’t have bet on who would win the space race.
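
The same blindness is easy to reproduce in a few lines. In this toy sketch (made-up data, a nearest-neighbour model chosen purely for illustration), the model happily maps a case unlike anything it has ever seen onto its most familiar pattern, at full confidence:

```python
# Minimal sketch of the outlier problem: a pattern-based model has no way to say
# "I've never seen anything like this" -- it maps every input onto a familiar pattern.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Training data: two familiar, frequently seen clusters of routine cases
routine_a = rng.normal(loc=[0, 0], scale=0.5, size=(200, 2))
routine_b = rng.normal(loc=[5, 5], scale=0.5, size=(200, 2))
X = np.vstack([routine_a, routine_b])
y = np.array([0] * 200 + [1] * 200)

model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# A genuinely novel case, far from anything in the training data
outlier = np.array([[40.0, 40.0]])
print("Predicted class:", model.predict(outlier)[0])
print("Confidence:", model.predict_proba(outlier).max())  # full confidence, no notion of "unknown"
```

There is no “I don’t know” in there, which is precisely what Ciaran Rogers describes below with virtual assistants.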

When presented with unique situations, AI’s limitations become obvious. Ciaran Rogers from The Digital Marketing Podcast hits the nail on the head with the following:

“Nobody got excited about AI-based Virtual Assistants, did they? A Virtual Assistant based on AI learns by repetition, it solves frequent issues. Here’s the problem: clients ask for an assistant when there’s a unique situation. Things don’t go as they should. (...) How often do they ask to speak to a human assistant? Most of the time, VAs are just there to frustrate you, and that’s because they don’t respond well to situations they don’t encounter time after time after time. That’s one issue with AI that’s going to be hard to deal with.”

Whenever stakes are high, we will want human validation

It’s not a question of whether AI might eventually make better decisions than us. After all, humans are riddled with their own problematic biases. Machine learning does what the label says: it learns.

“A lot of it works by scaling up. We have all these emerging techniques, but as we scaled them up with more data, more compute, and built larger and larger models, suddenly they can do things that they couldn’t do before,” explains Vincent Conitzer, Head of Technical AI Engagement at the Institute for Ethics in AI.

The bigger question is whether AI will be allowed to be the final decision-maker without human validation.

Even in a world where your car can fairly reliably drive you around without incident, you will still need a driver’s license to sit at the wheel. Traffic police will still want you alert in the driver’s seat, ready to intervene if the AI fails.
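
If you want to picture what that “human has the last say” rule could look like in practice, here is a minimal sketch. Everything in it, the case types, the confidence threshold, the review queue, is invented for illustration; it is not a real framework or regulation:

```python
# A sketch of the "AI recommends, human decides" pattern: high-stakes or
# low-confidence cases are never decided automatically.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    suggested_outcome: str
    confidence: float

HIGH_STAKES = {"parole", "organ_allocation", "termination"}

def decide(rec: Recommendation, case_type: str, human_review_queue: list) -> str | None:
    """Return a final decision only when it is safe to automate; otherwise defer."""
    # High-stakes cases always go to a person, regardless of model confidence.
    if case_type in HIGH_STAKES or rec.confidence < 0.95:
        human_review_queue.append(rec)
        return None  # no automated decision; a human has the last say
    return rec.suggested_outcome

# Usage: a confident, low-stakes recommendation can be automated...
queue: list = []
print(decide(Recommendation("A-17", "approve_refund", 0.99), "refund", queue))  # "approve_refund"
# ...but a parole recommendation is deferred to a human, however confident the model is.
print(decide(Recommendation("B-42", "deny_parole", 0.99), "parole", queue))     # None
print(len(queue), "case(s) waiting for human validation")
```

The point isn’t the particular threshold; it’s that some categories of decisions never get automated, no matter how confident the model sounds.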

AI’s limitations: it’s not good at making empathetic, fair, contextualized decisions

Artificial Intelligence software is already causing serious issues when given power. A firefighter was arrested for obstructing traffic. His fire engine almost got towed as he was assisting the victim of a collision on a busy highway. As you might imagine, the AI simply couldn’t evaluate the uniqueness of the situation and defaulted to its most frequently encountered circumstance: a badly parked car.

I’m kidding. It was a human cop who decided to handcuff a firefighter on a mission and hold him in the back of a police car for half an hour. For bad parking.

We should really try hard not to screw ourselves out of our reputation for reliability, so that we get to keep making the big decisions for humanity.