More than meets the AI: The biggest scam in tech

Preface: I love AI. I’m super long on LLMs, computer vision, and robotics. But not all AI is good. Some of it can only be used for harm.

I used to have a pet dog when I was a kid, a massive rottweiler who was as lovable and sweet as she was big.

One day, a fellow dropped me off at home after a buddy of mine had some car trouble. This guy was several years older than me and my mother was concerned that I was hanging out with people who were too mature for me.

The dog must have sensed my mother’s trepidation. When I tried to invite the guy in for a glass of tea, she went bananas. Barking and growling. She wouldn’t let the guy through the front door.  We’d never seen her do that before.

Once the gentleman had left, my mother decided that the dog must have sensed something was off. “There’s something wrong with that grown man,” she declared, “dogs can tell!”

Over the next few weeks, I caught a ride with several different people while my buddy’s car was being repaired. One of them, a boy my age named Stephen, would come inside for iced tea after dropping me off. The dog loved him. She greeted him at the door every time, excitedly, her big doggy butt wiggling with joy as he bent down to pet her.

A few months later, after we stopped hanging out and I was back to riding with my regular crew of friends, Stephen murdered his entire family. 

In this issue:

  • Human outcomes (228 words)
  • Passing the buck (717 words)
  • Who watches the watchers? (832 words) 

Human outcomes

The fact of the matter is that my dog was entirely unsuited for that kind of decision-making. Only a total moron would put a black-box intelligence in charge of security. It’s one thing to trust that the dog will defend you in a violent situation; it’s absolutely idiotic to expect the dog to judge people’s character and intentions.

That’s why I have one immutable rule when it comes to non-human intelligence: if it can negatively affect human outcomes, don’t use it. Throw it in the damn trash.

AI that predicts whether someone is creditworthy? SCAM

AI that predicts where police presence will be needed? SCAM

AI that predicts who the best candidates for a job are? SCAM

AI that predicts criminal recidivism? SCAM

It doesn’t take a tech expert to understand why. AI isn’t magical. It can’t glean insights from data that a human, given enough time, couldn’t.

What this means is that the above AI systems don’t actually do anything novel. They just speed up or automate existing manual processes. And if your process is bigotry, they’ll speed up or automate bigotry.

So, why then would police, hiring managers, loan agents, and judges need or want AI to do their jobs? 

We don’t honestly believe that people in such important positions would use a technology they don’t understand to make determinations that will affect human outcomes, do we? 

Passing the buck

Let’s say you’re the hiring manager at a major company. You’ve now been in your position for a couple of years and, so far, all of your new hires are doing their jobs well. Your CEO says you’re doing a great job.

Then, out of the blue, the government fines your company for discrimination.

It turns out, whether intentional or not, you’ve been exclusively onboarding straight, white men. 

You point out that you’re just doing exactly what you were told: you’re looking at the applications, conducting interviews, and finding the best fit for the position.

It isn’t your fault that the best candidates always seem to be white men. You tell your CEO that you don’t even look at the names, genders, or any other personally identifiable information before you select candidates to interview. 

They are not pleased.

You wonder if you’re supposed to just start hiring Black people, queer people, and/or women even if they aren’t the best candidate for the job.

Never mind that you’re entirely missing the point, out of touch, and unsuited for the position you’re in. Never mind that there are literally thousands of peer-reviewed scientific studies demonstrating that diverse workforces outperform non-diverse workforces in every domain. Never mind common sense, decency, and morality. Because right now, you’re just worried about yourself.

You could quit. But then you’ll have to explain to your next potential employer why you left your last company right after the government fined you for discriminatory hiring practices.

You could stop practicing harmful discrimination.

But, as we both know, if you’ve managed to make a career out of hiring only candidates who resemble your current workforce, you don’t have the slightest clue how to account for applicant diversity. Either that, or you simply refuse to, because you’ve chosen to believe that thousands of scientists and analysts are wrong about diversity (or you’re a bigot).

That leaves only one option.

You hire a billion-dollar AI company, one that specializes in HR systems and whose software is used by most of the Fortune 500. You explain to your CEO that, going forward, your company will have a state-of-the-art hiring system.

They’re pleased.

The government backs off. 

Now, your applicants are screened in a different, dumber way. An AI system that’s demonstrably stupider than my old rottweiler parses your company data and starts recommending applicants based on how closely they match the resumes, interview styles, and test results of your company’s current and previous successful hires.

If most of the people you’ve hired share the same general background, and the AI is trained on a database built from that data, the odds that it will recommend another person just like the ones you’ve already hired are beyond substantial.
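To make that concrete, here’s a minimal sketch of the “match the past hires” logic described above. The feature names and numbers are invented for illustration, and this is not any real vendor’s code, but the shape of the problem is the same: score candidates by similarity to previous hires, and you get more of your previous hires.

```python
# A minimal sketch of similarity-based resume screening and why it reproduces
# past hiring patterns. Feature names and numbers are invented for illustration.
import numpy as np

# Each past "successful hire" reduced to a feature vector, e.g.:
# [keyword overlap with past resumes, school prestige score, employment-gap flag]
past_hires = np.array([
    [0.90, 0.80, 0.0],
    [0.85, 0.90, 0.0],
    [0.92, 0.75, 0.0],
])

# The "model" is just the centroid of everyone you've already hired.
centroid = past_hires.mean(axis=0)

def fit_score(candidate: np.ndarray) -> float:
    """Cosine similarity to the average past hire; higher = 'better fit'."""
    return float(
        candidate @ centroid
        / (np.linalg.norm(candidate) * np.linalg.norm(centroid))
    )

# A candidate who looks like the existing workforce outscores one who doesn't,
# regardless of actual ability.
familiar = np.array([0.90, 0.85, 0.0])
different = np.array([0.40, 0.90, 1.0])  # different background, resume gap

print(fit_score(familiar))   # ~1.0
print(fit_score(different))  # ~0.64: penalized for not matching the past
```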

That’s also how AI-powered predictive policing, lending, and recidivism work. They’re massive scams and the only purpose they serve is to give government agents, C-suite executives, and HR departments “someone” to pass the buck to.

Hark back to this issue’s introduction. If Stephen had murdered me, my mother would have said “by golly gee wow. I never thought Stephen was bad. The dog liked him so much. How could any of us have known?”

But the other dude, the one who gave me a ride home after my friends and I got stranded on the side of the road, he’s the one my mother refused to even be polite to.

My mother wasn’t a dumb woman. She didn’t honestly believe my dog was magical. It was just convenient. When the dog backed up her own disdain for a person, the dog was right because, as she would have said, “mother nature knows best.”

But when the dumb dog was wrong, she didn’t reassess her personal beliefs. As the “human in the loop,” she figured she was smart enough to tell when the dog was right or wrong.

There exists no paradigm in which my mother could be convinced that her snap judgments, or the dog’s, on whether a human was “good” or not were ignorant and harmful. She couldn’t fathom a world wherein such snap judgments were useless and should be avoided.

And the same is true for hiring managers and CEOs, who stand to benefit far more from blaming the algorithm than from actually adopting human-centered hiring practices.

Who watches the watchers?

Lots of companies claim to have a solution for this. “Human in the loop” is the most popular one. This, despite the fact that countless studies have demonstrated that humans will offload cognitive burdens to any system they perceive as authoritative. In other words, we trust machines and robots more than we trust most other humans.

In predictive policing, vendors claim that the algorithms merely make “suggestions” as to where police coverage is needed, based on historical data. Law enforcement officers are supposed to take these suggestions and choose whether to act on them.

But what purpose does that actually serve? Police officers don’t travel the globe arresting people in strange new cities every day. They’re supposed to know their communities better than anyone else. They know where so-called “crime hotspots” are. The system can’t tell them where their presence is needed because it is literally impossible to “predict” crime.

An AI system cannot tell you where and when a crime will happen any more than a psychic can tell you what the next lottery numbers will be.
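What these systems can do is tell police where arrests have already been made, and then amplify that pattern. Here’s a toy sketch of the feedback loop, with invented numbers rather than any real product’s code: districts that were over-policed in the past get more patrols, more patrols produce more arrests there, and next year’s “prediction” confirms itself.

```python
# A toy feedback loop: patrol where past arrests happened, then record more
# arrests there because that's where the patrols were. Invented numbers.
import numpy as np

rng = np.random.default_rng(0)

true_crime_rate = np.array([0.10, 0.10, 0.10])  # three equally risky districts
arrest_history = np.array([50.0, 10.0, 10.0])   # district 0 was over-policed

for year in range(5):
    # The "prediction": allocate 100 patrols proportionally to past arrests.
    patrols = 100 * arrest_history / arrest_history.sum()
    # Arrests scale with patrol presence, not with the (equal) crime rates.
    new_arrests = rng.poisson(patrols * true_crime_rate)
    arrest_history += new_arrests
    print(year, patrols.round(1))

# Patrols stay concentrated in district 0 year after year, even though all
# three districts have identical underlying crime rates.
```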

Thus, the purpose these AI systems serve is to give leaders somewhere to pass the buck when the only answer they have for the tough questions is one that will tank their business.

  • Why does law enforcement overpolice minority communities?
  • Why do judges issue stiffer sentences to Black men without priors than to white men with criminal records?
  • Why does almost every successful STEM company in the world hire mostly straight, white, male workforces?
  • Why do Black families fail to secure home loans at the same rates or amounts as white families who make less and have worse credit?

Now, thanks to these bullshit AI systems that don’t actually work, the answer to all of those questions is “because the algorithm said so.”

Currently, governments and organizations around the world are harrumphing, hemming, and hawing over how best to “fix” these systems.

They want third-party corporate auditors to ensure the systems are functioning properly. Translation: they want auditors who can be financially incentivized to make sure their clients (the companies being audited) are happy with the results of their audits.

They want government agencies to ensure compliance. Translation: they want bureaucrats who know less about AI than the laypersons using the systems to ensure that the companies using them keep their paperwork up to date.

They want self-auditing. Translation: the same companies who’ve been hiring straight white men almost exclusively for decades, while claiming that they don’t have discriminatory hiring practices, can continue doing exactly what they’ve been doing without fear of any accountability.

There is only one solution to the problem of AI systems designed to dictate human outcomes: delete them. Trash them. Throw them away. 

They’re as useful as a grocery store full of spoiled, rotten food. They’re as smart as my dumb, lovable old dog who once spent 10 minutes soliciting pets and snuggles from a psychopath who would go on to kill three adults and a toddler, as they slept, for no apparent reason.

I never held that against the dog because I’m not an idiot. I never thought it was my dog’s job to decide which humans were good, which were deserving, and which were valuable. 

That’s a really, really stupid job for a dog. And an even stupider one for AI.

The last argument I’ll touch on is the dumbest: “you can avoid discrimination by removing identifiable information from the data the AI parses.”

You have to be pretty ignorant about how AI works to believe that. It’s like saying “I don’t see color, so I can’t be racist.”

The fact of the matter is that removing pertinent information such as race and gender makes it easier for AI systems and humans to discriminate against minorities. The model still picks up proxies for those attributes, and once the labels are gone, nobody can measure or correct the resulting bias.

Hiring managers and AI systems, for example, might both treat a gap in work history as a negative. If you have three candidates who are equal in every way except one of them has a six-month gap on their resume, you might be tempted to focus on the other two.

But what if the gap on the third is because of a disability, child-care, or other human situation?

The status quo punishes women for getting pregnant. It punishes people for getting cancer.

It punishes Black people for being accepted into prestigious colleges at lower rates than their equally (or less) academically talented white counterparts. It punishes all minorities for not being pipelined from one company to the next through personal networks with existing employees at the companies they apply to.

Furthermore, these AI systems are built from the ground up to punish neurodiverse people and people with physical disabilities that affect their speech, facial expressions, or their ability to think, speak, and gesture like a neurotypical, able-bodied person.
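Back to that resume gap: here’s a minimal sketch, with invented data, of how removing gender from the inputs fails to remove the discrimination. The training set contains no gender column at all, but the employment-gap feature correlates with gender in this made-up population, so a model trained on biased historical hiring decisions penalizes the gap anyway.

```python
# A minimal sketch of proxy discrimination. The "blinded" training data has no
# gender column, but the employment-gap feature correlates with gender in this
# invented population, so the model penalizes it anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

gender = rng.integers(0, 2, n)               # 1 = woman; never shown to model
gap = (gender == 1) & (rng.random(n) < 0.4)  # gap correlates with gender
skill = rng.normal(0.0, 1.0, n)

# Biased historical labels: past managers penalized the gap itself.
hired = (skill - 1.5 * gap + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, gap])            # "blinded": no gender anywhere
model = LogisticRegression().fit(X, hired)

print("gap coefficient:", model.coef_[0][1])             # strongly negative
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])   # equal skill, one gap
```

Dropping the gender column didn’t remove the bias; it only removed the ability to see it.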

That, readers, is the epitome of harmful discrimination and thousands of corporations around the world have leaned into it.

We used to have to wonder if the people making decisions that affect human outcomes were bigots. Now, whenever they pass the buck to an AI, we don’t. 

Read more More Than Meets the AI here!
