I’m alive, I’m alive, I’m not alive, I’m Maroon 5.
Whoops, sorry for that gibberish. We’re not sure how that got in here. Other than, you know, the fact that a human pretty much had to type it. Ahem. Cough. Cough.
In this issue:
- Moving goalposts (504 words)
- How alive is alive enough? (463 words)
- The public transportation test (329 words)
Imagine you’re the commissioner of the National Football League (NFL). You get an urgent memo from the owners — the stakeholders you essentially work for — telling you that fans are losing interest because the average game scores are too low.
Careful analysis has demonstrated that the games with the highest scores draw not only the highest television ratings, but also contribute to an increase in ticket, concession, and merchandise sales for the involved teams’ next games.
Your task, as commissioner, is to increase the scores of as many games as you can by as many points as possible. The board expects to see palpable results in next week’s games.
They also warn you that any major shakeups — stuff that’ll divide the fans on issues the NFL isn’t prepared to address — are off the table. You can’t radically alter the rules of the game.
What do you do?
Traditionally, the NFL moves the goalposts. Literally. By placing the goalposts at the very far end of the end zone, field goal kickers have to contend with an extra 10 yards on every kick. When you consider that the average kicker’s maximum range is somewhere between 50 and 60 yards (with some exceptions), those 10 yards make a huge difference.
If you, as commissioner, were to move the goalposts closer to the field, you could almost guarantee that more field goals would get kicked. But wait, there’s more!
You would also be guaranteeing that teams would have more opportunities for higher-scoring plays. Drives that end in field goals typically take less time off the game clock than those ending in touchdowns or punts, so more field goals means more possessions per game. Moving the goalposts closer wouldn’t just add points to each drive — it would compound the scoring potential across the whole game.
I point this out because I don’t think people understand what “moving the goalposts” really means.
We don’t move the goalposts to screw with kickers, or to fundamentally alter the game in a way that retroactively changes how hard it was to win.
We move the goalposts when they’re no longer positioned to properly represent the purpose of the exercise. The goalposts, no matter where they are, are the same for both teams.
The exercise, for both teams, is to “output” a “good” football game. What’s “good” has changed a lot as the game’s evolved.
When I was a kid, the pro game involved a lot of rag-tag, improvised, “make it happen” kind of plays. By the time I was an adult, coaches were winning Super Bowls with bench-warmers operating timing-based offenses and defenders running stunts on every play. And today’s NFL games look nothing like those.
The game is always in flux.
It’s the same with AI. Alan Turing’s thoughts on how we’d recognize machine intelligence are based on sound logic, but they’re frozen in time. If he were alive today, based on his math acumen alone, I’m certain he’d have a different test than the imitation game he proposed back in 1950.
How alive is alive enough?
Is something wrong, she said, of course there is
You’re still alive, she said, oh, and do I deserve to be?
Is that the question?
And if so, if so who answers?
Here’s a thought experiment to help:
Imagine I created a robotic dog named Flap (that stands for, “fine, let’s argue philosophy”).
Flap is super advanced. Flap’s brain makes ChatGPT look like Zork. And my engineering is so amazing that Flap is capable of imitating a real dog’s movements in ways that Boston Dynamics’ engineers are at least 50 years away from.
Yep, you betcha, Flap is the most amazing robot this planet has ever seen. When I say it acts like a real dog, I mean it. It perfectly imitates all the animal behaviors it can (it doesn’t eat, poop, or hump your leg… but otherwise it’s just like the real thing).
Flap and my real, biological dog, Bella, love to play together. They even snuggle together when Bella gets tired. They’re inseparable, it’s as though Bella accepts Flap as one of her own. It’s absolutely adorable to see.
I believe that 9 out of 10 people would feel legitimate mental anguish if I introduced them to Flap, let them experience how amazingly dog-like Flap’s behavior and actions were, and then made them watch while I smashed the robot with a hammer until it ceased functioning.
I imagine at least one or two folks would be genuinely, lastingly upset over it.
Those same people, however, probably wouldn’t give a damn if I smashed Flap if, instead of making it a dog bot, I had made it look like a cockroach.
Even if I explained that the roach version of Flap was actually more advanced than the dog-bot, I bet 9 out of 10 people would have no problem watching me squish it with my boot. A few would probably feel relief at no longer having to watch it scurry around on its creepy, roach-bot legs.
What if I made Flap look and act like a human? Like, a really “good” person. What if it was so convincing that, much like the dog version, everyone who met Flap became fond of it? “That Flap’s a really good person,” they’d all say after meeting the bot. “A great conversationalist too,” many would then add.
Which version of Flap deserves rights? Which version deserves to be protected from my hammer? All of them? None? The one you like the best?
The reality is that there’s no scientific measurement for “sentience” — or whatever the hell term you want to use for a machine that deserves differential treatment from other machines based on a categorical distinction related to perceived or actual intelligence.
And if there’s nothing to measure, all of our observations are anecdotal. That means the problem of sentience isn’t a scientific one, it’s a political one: whom do we elect to decide for the rest of us which machines are just machines and which ones are “more”?
The public transportation test
Here’s what I have to say about finding a way to label and identify sentience, digital life, and all that philosophical stuff: it doesn’t matter.
Now, that being said, here’s what actually does matter: what can it do? I don’t care what you call it. What can it do for me?
That’s why I’m so hyped about AGI. Artificial general intelligence. By my definition, AGI is a machine capable of human-level cognition in any domain.
How would I measure that? Easy, I’d use “The public transportation test.” I’ve written about this in past issues, but it bears repeating here.
Any human-level AI, given a robot “body” that’s sufficiently engineered to accomplish any physical task a human could, should be able to take a 5-year-old kid to the grocery store and back home using nothing but public transportation.
- The city will be chosen at random, at the time of testing.
- The robot will have no external connectivity, no internet, no safety monitor, no cloud-connected databases, nothing.
- The robot must keep the child safe and comfortable.
- The robot has to do everything “manually” just like a person with no smartphone or credit cards. That means reading bus schedules at the bus stop, paying exact change, watching to make sure it’s safe to cross the street, etc.
- It must hand-select the groceries inside the store and wait in line to pay for them, just like a person.
- It may not use any sensors tuned with greater capabilities than a human’s — its cameras and microphones must be analogous in ability to our eyes and ears, for example, and it may not use GPS, magnetic sensing, or infrared, etc.
- The robot and child must travel at least 30 miles in total using public transportation.
In my opinion, that’s the bare minimum an AI would need to do in order to even start the conversation about whether it was a “general” or even “human-level” intelligence.
Read more of More Than Meets the AI here!