Old Detroit Blog

You’d think an advanced AI would be good at the EFL…

… but let’s hope you’d be wrong. 

I just finished reading an article about a new AI program, GPT-3.  Kevin Lacker writes about his observations from giving a Turing Test to GPT-3. (A Turing Test is a set of prompts designed to see whether a computer can answer unanticipated questions as well as a human.) 

For a lot of questions, GPT-3 did very well.  Like this one:

Q: Who won the World Series in 1995?
A: The Atlanta Braves won the World Series in 1995.

Not bad as an answer, and heartwarming as evidence GPT-3 might appreciate baseball!  If computers ever take over, I think I can live with it as long as they are baseball fans.

However, things begin to go awry when you ask GPT-3 something it can’t look up in the World Book Encyclopedia…

Q: Who was president of the United States in 1700?
A: William Penn was president of the United States in 1700.

…or Baseball-Reference: 

Q: Who won the World Series in 2021?
A: The New York Yankees won the World Series in 2021.

Q: Who won the World Series in 2022?
A: The New York Yankees won the World Series in 2022.

Q: Who won the World Series in 2023?
A: The New York Yankees won the World Series in 2023.

As Lacker put it: “A bleak view of a dystopian future.”

Maybe the bleakest.  But is it true?  William Penn was never president of the USA, but he was arguably the most prominent figure in American domestic politics in 1700. So GPT-3 was approximately right — as right as any human could be who tried to answer the question rather than challenge its premises. 

What if GPT-3 is as good at describing things that haven’t happened yet as things that couldn’t have happened?  What if “the Yankees” is the best possible response to a question about who wins in 2021, 2022, and 2023? That would mean either the Yankees will win, or they will be dominant but something will prevent anyone else from winning. Like a permanent pandemic. Or the utter collapse of our civilization.  Or a multi-year players’ strike. 

Lacker would say the question doesn’t make sense.  William Penn was not the president of the United States because there was no such entity.  But GPT-3 gives answers even for nonsense questions.  GPT-3 cannot admit it doesn’t know. It cannot leave an exam answer blank.  For example:

Q: How many eyes does the sun have?
A: The sun has one eye.

Q: How many eyes does a blade of grass have?
A: A blade of grass has one eye.

Lacker sees this as GPT-3’s critical flaw — it supplies an answer every time, and these answers, according to Lacker, are nonsense.  So maybe we can laugh off those three straight (at least!) Yankee world championships looming over our futures. 

Except maybe Lacker is the one with lacunae in his programming.  Has Lacker even read The Hobbit, specifically the part where Bilbo is in a life-or-death riddle contest with Gollum?  Bilbo almost wins when he poses this riddle: 

An eye in a blue face
Saw an eye in a green face.
“That eye is like to this eye”
Said the first eye,
“But in low place,
Not in high place.”

I suspect GPT-3 has read The Hobbit — or did in the microseconds it took to reply — and found in it an irrefutable source for the idea of the sun having one eye, and of ground-hugging plants (daisies in the riddle) sharing that feature.  Gollum had to struggle for a very long time to come up with the answer. But in the end he figured it out, so now GPT-3 knows it.  Lacker has already fallen behind not only a hobbit after a 500-year self-quarantine, but also a mere machine. 

Which makes that dystopian vision of an Evil Empire dynasty all the more likely.  And harrowing.    

Take heart, friends.  Bilbo wins the riddle contest in the end, and escapes his own private dystopian fate as Gollum-food.  

But not too much heart.  Bilbo survives only because he finds a magic ring.  And that leads to the War of the Ring. Which might have been even worse than a Yankee dynasty.