PCC Talks With The Author
About His New Book, “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots”

By Paul Freeman [September 2015 Interview]

“You just can't differentiate between a robot and the very best of humans.” - Isaac Asimov, “I, Robot”

Though films and TV are bursting with robot-themed fiction, much of the public doesn’t seem to be fully aware of how prevalent AI, artificial intelligence, already is and how dominant it can become. John Markoff’s fascinating, informative, thought-provoking new book, “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” should remedy that.

Publishers Weekly says, “A detailed, engrossing history of robotics…This revealing look at profound technological and economic developments will unsettle anyone who has a job to lose.”

Markoff, a San Francisco resident who was raised in Palo Alto, has covered technology since 1977. He has covered science and technology for The New York Times since 1988.

He tells Pop Culture Classics, “AI was always a promise, but it was sort of always over-promised and didn’t deliver… until more recently, when it’s gotten more interesting.”

At the start of his career in journalism, Markoff reported on the potential of the internet. This was before most of the media was paying any attention to the revolutionary innovation.

“When I made the shift from covering Silicon Valley to coming to the science section [at the New York Times], part of my argument to the editors was that internet and personal computing had transformed society in the previous three decades and I made the hypothesis that AI and robotics would have a similar impact over the next couple of decades. I still think that that’s probably true.”

In the book, whose title comes from a Richard Brautigan poem, Markoff establishes that Stanford has been at the forefront of AI development right from the start.

“Shakey [the mobile robot developed in the late 60s] at SRI [originally called Stanford Research Institute] was the first truly significant effort to build an autonomous machine. Charlie Rosen [its project manager] built Shakey as the first experimental platform for doing AI research. Lots of technologies that are used by all of us, for example A*, the navigation algorithm that’s used by most smartphones, came out of the Shakey project. And you can draw a direct line from the early speech research groups, done at the end of the Shakey project, directly to Siri. So it had a big impact on the world.

“At the same time, there was also interesting stuff going on at SAIL [Stanford Artificial Intelligence Lab]. They did the first robot arm there.”

In his research for the book, Markoff crystallized the fact that physicist/inventor William Shockley actually came to the area with the ambition of building early robotics technology. Shockley’s dream was at the root of Silicon Valley’s rise.

Markoff notes science-fiction’s influence on those designing robots over the years. But sci-fi films such as “Ex Machina” and TV shows like “Humans” demonstrate that, while AI could bring amazing advancements to society, it might also spell disaster. Markoff’s book grapples with the key questions: Will we control the robots, or will they control us? Will we become masters, slaves or partners?

“It seems that American society, in particular, every decade or so, has this period of anxiety about the impact of automation in all kinds of different ways,” Markoff says. “As early as the early 1950s, with the dawn of automation, those questions were raised and it seems to come back at regular intervals.

“I believe that we, as a society, are just as obsessed about robots as the Japanese. It’s just that the Japanese have a pure love affair with robots and we have more of a love/hate relationship. And it gets played out in the endless stream of movies. Science-fiction really shapes our view and sometimes it gets us significantly ahead of what’s possible. We come to expect robots to behave like ‘Chappie’ or ‘Her,’ and they actually are much more pedestrian at this point.”

Stephen Hawking, Elon Musk, Bill Gates and Stuart Russell are all warning about an existential threat from AI.

“It’s good that they’ve raised the issue, because we increasingly are going to have to deal with the consequences of autonomous machines. There has been demonstrable progress. However, self-aware systems, or systems with intelligence at or above human level? I don’t think we’re anywhere near that.

“There are a lot of people around the Valley, people like Ray Kurzweil [futurist/inventor], Jeff Hawkins at Numenta, Bill Atkinson, one of the Mac designers, who really believe in this rapid acceleration which they refer to as ‘the singularity,’ a term coined by a science-fiction author, who happened to be a computer scientist, Vernor Vinge. I’m a huge Vinge fan, but I think they’re wrong on the notion of the unbridled exponentials. I think that these things all become S-curves. And interestingly, right now, when they’re talking about this, we’re seeing the doors fall off of Moore’s Law [Intel co-founder Gordon E. Moore’s observation that the number of transistors in an integrated circuit doubles every two years]. It’s slowing down.”

John Markoff, photo by Leslie Terzian Markoff

In the 70s and 80s, there was a debate between a group of philosophers at U.C. Berkeley and the AI community. “People like John Searle and Hubert Dreyfus were very skeptical about the claims of rapid movement towards machine intelligence. I particularly like Dreyfus’ critique of Minsky [AI cognitive scientist Marvin Minsky], where Dreyfus said, ‘This is a little bit like saying, because we’ve gotten to the top branch of the tree, we’re making rapid progress to the moon.’”

Recent books like “The Rise of the Robots,” “Humans Need Not Apply” and “The Second Machine Age” argue that automation is going to have a dramatically disruptive effect on society. Markoff doesn’t agree.

“I don’t think that the impact will be as overnight as they think. I think it will be more gradual. And, though I was on that side for a while, I’ve moved to the other side. I think that they have not taken into account the demography and the fact that all over the advanced world, the population is aging. And that’s really going to change the equation both in terms of the size of the work force and the role of robotics. In 2020, there will be more people in the world who are over 65 than under five. For the first time in history, we have an aging population. And the notion of elder care robots is, I think, actually a positive one. My worry is the technology is not moving quickly enough to be around for me, when I need it,” Markoff says, laughing.

Self-driving cars have been much in the news. Markoff doesn’t think we’ll see those ruling the roads in the next decade. “But what about setting cars that don’t crash, as opposed to self-driving cars, as a design goal? And that, to me, is the way to think through this IA [amplified human intelligence] versus AI thing. So you can wrap technology around people and sort of protect them from themselves and give them a guardian angel or a driver’s ed instructor to watch over their shoulder while they drive. That would probably be of more value than to try to build a completely automated car.”

Morality versus self-interest and economics is a key debate. Part of that discussion involves deployment of autonomous weapons. Markoff cites author Isaac Asimov’s laws of robotics.

“The notion that robots shouldn’t harm humans is right at the core of that. Of course, we’ve skipped way past that. There’s a group of people who have bastardized that and are proposing that the value of automated weapons is such that you could build a machine that would not commit war crimes. So it would be a machine that would kill, but it would only kill enemy combatants, not enemy civilians. That’s really kind of creepy, when you think about what Asimov originally attempted. And there’s an activist group called the Campaign to Stop Killer Robots that’s trying to push for a U.N. treaty to ban these kinds of weapons. What I find especially disturbing: let’s say you design an ethical killing machine and your enemy doesn’t. What then? I think that’s an imponderable problem.”

Ultimately, our fate is in our own hands. “I’m optimistic, because it’s a human choice,” Markoff says. “I don’t think these machines are going to design themselves. I don’t think that there’s a new species evolving that’s separate from humans. These are very human decisions. I completely admit that that’s not a reason for optimism, given some of the things that are happening in this world,” Markoff says with a chuckle, “but I see good examples of design that are human-centered and it gives me hope that we can make human choices that use technology to improve the world.

“The solution is to come up with a synthesis of AI and IA technologies. And that’s the hope. Siri is a good model for a machine that’s a partner with humans. So it’s not impossible for us to do the right thing.”

Change is inevitable. Markoff says, “I don’t think that we can back away from these technologies. On a certain level, it’s quite remarkable. Last week, Facebook announced that a billion people used Facebook on a single day. And I remember walking into a small room on University Avenue [in Palo Alto], when Facebook was 20 people. I was visiting the guy who had rented them their space. He said, ‘Yeah, it’s a startup, really young kids.’ And they’ve just eaten the world.”

Markoff is excited about the progress in robot perception, particularly machines that see and listen, as well as advances in dexterity. Cloud-based programs will be making more and more of our decisions for us. Markoff finds that worrisome.

“If we give up our decision-making, there’s a great risk there. If machines are deciding our choices on everything from where to eat Korean food to which spouse to pick, we’re really into a different world than the one we were living in a few years ago. I think that’s starting to happen. And I feel uncomfortable about that, I guess.”