More than meets the AI: The ultimate trust fall

Your AI chatbot name is your middle name, plus the name of the first car you owned, and your credit card number.

PayPal me $25 and I’ll tell you my secret for getting total strangers to give you money.

The check is in the mail. 

Trust me.

In this issue:

  • The ultimate trust fall (754 words)
  • What could you do with trust? (367 words)
  • The intelligence myth (489 words)

The ultimate trust fall

And you could have it all

My empire of dirt

I will let you down

I will make you hurt

I wear this crown of shit

Upon my liar’s chair

Full of broken thoughts

I cannot repair

At some point in the next decade or so, we’re going to have to deal with the fact that artificial intelligence has no loyalty.

The powers-that-be keep hyping pre-AGI models as personal assistants and creativity aids, but the reality is that they’re more like hotel concierges: they seem to be at your service, but they don’t work for you.

Sure, you can ask ChatGPT for advice, information, or help with mundane tasks such as spit-balling names for a podcast or coming up with backstories for your D&D NPCs.

But a real personal assistant needs to be trustworthy. Would you give ChatGPT your credit card number? Would you give it access to your medical information? Do you think it would be safe to give it administrative access to your personal accounts across the web?

Let’s run a quick thought experiment to flesh out the concept, shall we?

Imagine you’re you. Just as you are right now. As you’re reading this, you get an email from a big tech corporation that says you’ve been selected as the test host for a new kind of AI-powered assistant called “Plant WX2” (that stands for “Perfectly Loyal And Not Treacherous, Wink Wink”) or “Plant” for short.

Plant is capable of performing any task a human assistant could do with a phone and a computer. Need coffee? Plant will place a delivery order for you. Want a rundown on last quarter’s sales figures? Plant will generate that report for you. Trying to figure out the optimum work/meetings/life balance? Plant will reorganize your entire schedule using NASA-level statistical analysis.

Plant is the perfect assistant. Except, of course, that it’s not YOUR personal assistant. It’s not human. It’s not intelligent. It doesn’t understand anything. It just executes algorithms. It’s software. It’s a bunch of ones and zeros. 

And big tech owns it.

Plant can’t sign an NDA or agree not to share your information with its creators. Plant can’t be held responsible if it accidentally divulges sensitive corporate data online. Nobody’s going to arrest Plant if it unwittingly leaks personally identifiable information about your family, such as your daily routine, home address, and what your children look like.

And there’s almost no way for you to know what the big tech company behind Plant is doing with your data. Unless you’re an advanced developer or engineer with experience in neural networks and administrative access to the model itself, you’ll just have to trust that whatever you’ve been told about Plant and its capabilities is true.

Is it a good idea to use Plant? That depends on how much responsibility you’re willing to shoulder.

If you hire a human assistant and they blab your company secrets, you can sue them. If they endanger your family, you can press charges. They can be held responsible.

But, you can bet your bottom dollar that big tech isn’t going to give anyone access to Plant until they’ve agreed that “big tech company” isn’t responsible, liable, or otherwise on the hook for any harm the chatbot does.

When Meta’s Galactica, for example, told me to eat glass, explained that only white people are capable of creating civilization, and said I should kill myself because I’m queer, Yann LeCun, the company’s AI boss, told me it was my fault and that I was responsible for making the machine output such things.

Meta subsequently took Galactica down. It’ll be interesting to see what steps the company takes to regain public trust in its AI systems.

That’s the thing about trust. It’s like love or faith. You can’t hold it in your hand or weigh it. It doesn’t come in discrete units that can be counted. It’s mostly measured in goodwill.

At the end of the day, for example, it’s up to you to decide whether you trust your Tesla Full Self-Driving system enough to let it handle the driving while you take a nap.

And, if you get caught or end up involved in an accident, you’ll be the one who is held fully responsible. If Tesla trusted its Full Self-Driving system, then it would be the responsible party.

It’s the same for all AI systems. And that raises the question: why would anyone trust a machine more than its creators do?

What could you do with trust?

Let’s take a walk down the other side of the trail, where everything is wonderful and sunny and nothing bad ever happens.

I believe that if we could somehow build a machine we could trust, it would revolutionize our entire way of life. I’m talking humanity 2.0 here. But, trustworthiness is a slippery concept.

Here’s my definition of a trustworthy AI: A machine that demonstrates data encryption at the user level and accountability at the corporate level.

Such a machine would only have my best interests in mind. And the corporation backing it would have to empirically demonstrate that it has absolutely no access to the data exchanged between the AI and me.
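What would “no access” even look like in practice? One possibility is client-side encryption: the key never leaves your device, so whatever the company stores on its servers is gibberish to it. Here’s a minimal sketch in Python using the cryptography library; the Plant framing is just the thought experiment talking, not any real product. And note the catch: this only protects stored data. As long as the model itself runs on the company’s servers, it still has to see your words in the clear to answer you, unless it runs locally on your device.

    # A minimal sketch of user-held-key encryption for a hypothetical
    # Plant-style assistant. The data is encrypted on the user's device
    # before it touches the provider's storage, and only the user holds
    # the key, so the company can't read what it keeps.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    # The key lives only on the user's device (e.g., in the OS keychain).
    user_key = Fernet.generate_key()
    vault = Fernet(user_key)

    conversation = "Me: reorganize my schedule around the school run."
    stored_blob = vault.encrypt(conversation.encode())  # what the provider would store
    recovered = vault.decrypt(stored_blob).decode()     # only the key holder can do this

    assert recovered == conversation

That catch is exactly why “empirically demonstrate no access” is such a tall order for a cloud-hosted assistant.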

If any situation arose wherein good-faith use on my part resulted in unintended harm or damages to myself or any other living entity, the corporation behind the machine would be culpable.

That means, for example, if I asked the machine how to make napalm and then I burnt my face off, it would be my fault. But, if I asked the machine for nutrition advice and it suggested eating crushed glass in order to ensure I’m getting enough dietary silicon, the company that trained the AI to harm me should be held responsible.

I’m no lawyer, but it sounds like it would be easier for a camel to squeeze through the eye of a needle than it would be to come up with a legal framework to support that idea, so let’s just wave our magic Pretend Wand and imagine what it would be like if we could.

  • You’d always have someone to share your deepest thoughts with
  • You’d have a lawyer whose only legal interest is representing you to the best of its ability
  • You’d have an accountant whose only financial interest is handling your money
  • You’d have a creative assistant who would never steal or leak your ideas
  • You’d have a business partner who’d never betray you, speaks nearly every language fluently, is a master programmer, and can retrieve facts on just about any subject with the speed and accuracy of Wikipedia

You could probably do a lot with trust. But first, we need to get there.

The intelligence myth

Just like trust, love, and faith, intelligence cannot be measured. But, unlike those three, “intelligence” isn’t really a commodity at all. It’s a form of judgment.

In this way, “intelligence” is like beauty and coolness. They’re in the eye of the beholder and cannot be objectively quantified or even recognized.

But what about the intelligence quotient, or IQ test? To this, I respond that the IQ test is about as efficacious as the Love Tester machines you’ll find in many bars. In fact, a lot more scientific rigor goes into creating the Love Tester machines than into the IQ test: at least those have to be engineered.

IQ tests are bunk. Intelligence cannot be measured. Full stop.

Here’s a simple thought experiment to help you understand why:

Let’s say I’m the smartest person in the world but was raised by wolves. I can’t speak or understand any human language and I’m unwilling to learn.

Now let’s say I’m in a room with another human. We’re both given the same IQ test. They show me a picture and I’m supposed to arrange a set of red and white triangles to match the image. I pee on the blocks because I don’t give a shit about triangles and nothing that’s happening in this room matters to me.

Here’s the thing though: IQ tests don’t just rely on visual intelligence. They also require recall, sorting, math, language, and prior knowledge. The first standardized, diagnostic IQ test I ever took contained a question asking what the circumference of planet Earth is.

How the hell is a person raised by wolves supposed to have the slightest idea of how big around our planet is?

You see, we make a lot of assumptions about intelligence based on our cultures and experiences. We think that even an idiot should know that water is made of hydrogen and oxygen or that you shouldn’t mix bleach and ammonia.

But, 5,000 years ago, the most intelligent people on the planet couldn’t have told you what water was made of or what a “chemical reaction” was. That doesn’t mean they weren’t as intelligent as we are on average; it just means they weren’t as educated or knowledgeable.

When you strip away everything we “know,” all we’re left with is that which we can figure out. And there are myriad forms of intelligent expression. 

However, paradoxically, our intelligence is often fine-tuned through our education. Things we “know” often take precedence over our instincts.

When Michael Jackson died, for example, law enforcement and medical authorities conducted numerous inquiries into the circumstances surrounding his medical treatment. Allegations of malpractice were leveled against the late pop king’s personal physician.

However, a little over 200 years ago, when physicians essentially murdered George Washington via exsanguination, nobody at the time batted an eye. We haven’t gotten more intelligent since then; we’ve just become more educated.

The fact of the matter is that intelligence can’t be measured. If someone is mentally ill and incapable of expressing their intelligence, does that mean it does not exist? If someone is intelligent but unfamiliar with a concept, does that mean they’re less intelligent than someone who is familiar with it?

When we claim to be measuring intelligence, what we’re actually doing is assessing how well someone performs specific tasks in a particular testing environment.

Deciding who is the smartest via an “intelligence test” is like deciding who is the gayest based on wardrobe, who is the “most British” based on accent, or who the best athletes are based on their ability to throw an American football.

You can call such a test whatever you want, but that doesn’t make it scientific.


Read more More Than Meets the AI here.
