I Am Not A Robot. Or Am I?
Many of us are used to filling out the ‘prove you’re not a robot’ forms on websites. The jumble of colored letters in different fonts, angled in various ways, makes them slightly difficult, if not annoying, to decipher. CAPTCHAs, or “Completely Automated Public Turing test to tell Computers and Humans Apart,” rely on computers’ inability to recognize the shapes of the letters—something that’s easy for humans to do.
But writing this week in the journal Science, researchers describe an artificial intelligence technique that can easily solve these tests. The computer model, called the Recursive Cortical Network, is a system that’s more efficient at learning and generalizing about visual information. The researchers say it can achieve accuracy as good as or better than state-of-the-art deep learning approaches while using around 5,000 times fewer training images. Scott Phoenix, co-founder of Vicarious, a California-based A.I. company, says that the CAPTCHA work is just a demonstration of a technique that could be useful for many other computer vision applications.
Scott Phoenix is co-founder of Vicarious, based in Union City, California.
IRA FLATOW: And now, it’s time to play Good Thing Bad Thing.
Because every story has a flipside. You know those Prove You’re Not a Robot forms on websites with a jumble of colored letters and different fonts going every which way? They’re called CAPTCHAs, for Completely Automated Public Turing Tests to tell computers and humans apart. And writing this week in the journal Science, researchers describe an artificial intelligence technique that can easily solve them without the complex training other AI systems might require. It’s a system that’s much more efficient at learning about visual information. Joining me to talk about what could possibly be the good thing about that is Scott Phoenix. He’s co-author of that report in Science and co-founder of Vicarious in Union City, California. Welcome to Science Friday.
SCOTT PHOENIX: Thanks, it’s great to be here.
IRA FLATOW: So the bad news first? Are we done with CAPTCHAs? Are they all completely broken?
SCOTT PHOENIX: I mean, how is that bad news?
IRA FLATOW: OK, let’s do it from the good news side. That’s good news?
SCOTT PHOENIX: I mean, I think there is good news everywhere here. I think that we’ll have to enter a lot fewer squiggly characters that you really have to squint at to figure out whether it’s a B or an A. And I also think, in the process, we’ve learned a little bit more about how the human vision system works and how we reason visually.
IRA FLATOW: Now, I know this is not a new problem. Because you announced that you had solved this a few years ago.
SCOTT PHOENIX: Yes, that’s right.
IRA FLATOW: And so what’s new about this announcement this week?
SCOTT PHOENIX: Well, a few years ago, when we announced we could do it, we didn’t publish how. Because we wanted to give the outside world time to move on from making people enter squiggly characters. And so since we announced that we could do it a couple of years ago, Google and others have updated their systems, so that you now click pictures of dogs, or cats, or whatever it is, in order to prove that you’re a human. And so those tests, I think, are a lot more friendly to humans anyway than entering all these squiggly letters. And by giving the outside community some time to update their systems, we made it a lot safer for us to publish our research.
IRA FLATOW: So are you going to break this new system, too?
SCOTT PHOENIX: I mean, I think someday. In the long arc of history, artificial intelligence eventually is able to do all the things that humans are currently able to do. And so yeah, I think there will be this ever-increasing set of tasks that AIs can do and this ever-rising level of tasks and challenges that we give our robot or computer companions to attempt for us.
IRA FLATOW: And now, what made it possible for this system to be broken? What special magic is going on?
SCOTT PHOENIX: Well, I think it’s all about drawing inductive bias from the brain. So we’ve learned a lot about the way the brain works in the last 30 years. And that information hasn’t quite made it yet into most of the mainstream artificial intelligence algorithms that everyone uses. And so what I think about our work at Vicarious is sort of like arbitrage between what we know now about the brain, or we think we know now about the brain, and what doesn’t yet exist in the mainstream AI approaches.
IRA FLATOW: So the good news now is that this technique can be used for other things?
SCOTT PHOENIX: Yeah, exactly. So at Vicarious, we use it to power robots that can manipulate objects and solve problems visually.
IRA FLATOW: So the robots could do things like picking up widgets, sorting them, and so on, tedious tasks?
SCOTT PHOENIX: Tedious tasks, exactly, and sometimes dangerous tasks, too.
IRA FLATOW: Where do you see AI going now?
SCOTT PHOENIX: I think the future of AI is all about robots actually. I mean, we live in bizarro land right now, where all the ingredients that go in a robot– the motors, and the plastics, and the sensors, and electricity– are all really affordable. And nobody owns any robots. And that’s an AI problem. I think, once we can make intelligences that can control robots the way we control our own bodies, we’ll see a lot more robots. And they’ll be a lot more affordable.
IRA FLATOW: OK, Scott, thanks for taking time to be with us today.
SCOTT PHOENIX: So glad to join you.
IRA FLATOW: Scott Phoenix is co-founder of Vicarious in Union City, California. We’re going to take a break. And when we come back, we’re going to talk about death. No, no, nothing to be afraid of, no experiments on the program today. We’ll talk about different ways people handle death around the country, around the world, with the author of From Here to Eternity: Traveling the World to Find the Good Death. Caitlin Doughty is with us. So take a little bit of time out and come back. We’ll see you after the break.
As Science Friday’s director and senior producer, Charles Bergquist channels the chaos of a live production studio into something sounding like a radio program. Favorite topics include planetary sciences, chemistry, materials, and shiny things with blinking lights.