Can We Compare Robots to Animals and Build Diverse Relationships With Them?

Women Leading Visual Tech: Interview with Dr. Kate Darling, Researcher at MIT’s Media Lab

LDV Capital invests in people building businesses powered by visual technologies. We thrive on collaborating with deep tech teams leveraging computer vision, machine learning, and artificial intelligence to analyze visual data. We are the only venture capital firm with this thesis. We regularly host Vision events – check out when the next one is scheduled.

Our Women Leading Visual Tech series is here to showcase the leading women whose work in visual tech is reshaping business and society.

[Photo: Dr. Kate Darling]

Dr. Kate Darling is an academic with a background in the legal and ethical implications of technology and a researcher at the Massachusetts Institute of Technology's Media Lab. She is a leading expert in robot ethics. Dr. Darling investigates social robotics and explores the emotional connection between people and lifelike machines. She studies what will soon be called "human-robot relationships".

She received degrees in Economics and Law from the University of Basel. After completing a 2014 Ph.D. titled "Copyright and new technologies" at ETH Zurich, Dr. Darling returned to the US to teach a robot ethics course at Harvard Law School. She is a former fellow at the Harvard Berkman Klein Center for Internet & Society and the Yale Information Society Project, and an affiliate at the Institute for Ethics and Emerging Technologies.

She has an honorary doctorate of sciences from Middlebury College. Her work has been featured in Vogue, The New Yorker, The Guardian, BBC, NPR, Forbes, WIRED, The Atlantic, Die Zeit, The Japan Times, and more.  

Dr. Darling’s first book, “The New Breed: What Our History with Animals Reveals about Our Future with Robots”, hits shelves today, on April 20, 2021.

Abigail Hunter-Syed and Dr. Darling spoke about our tendency to assign human characteristics to the machines in our lives. Has anyone else ever apologized to their Roomba? Or cursed at Alexa? See how Dr. Darling thinks we should approach our relationships with robots...

(Note: After five years with LDV Capital, Abby decided to leave LDV to take a corporate role with fewer responsibilities that will allow her to have more time to focus on her young kids during these crazy times.)

The following is the shortened text version of the interview.

Abby: How would you describe what you do in one line?

Kate: I study human-robot interaction from a social, legal and ethical perspective.

Abby: You started with economics and law. What made you make a jump to robotics?

Kate: What has always fascinated me is how systems shape human behavior. Law and economics are systems that shape human behavior but so is technology. To be honest, I've always loved robots. The fact that I managed to create a job for myself where I could use my background in social sciences and play with robots all day is a dream come true.

Abby: Do you remember the first robot that you ever “met”?

Kate: That depends on how you define a robot. The baby dinosaur robot called the Pleo made the biggest impression on me and that kickstarted my career shift to robotics.

I bought this toy in 2007 when it came out. It has motors, touch sensors and infrared cameras. 

[Photo: the Pleo robot toy]

The dinosaur can move and mimic emotions. It can mimic pain.

When I showed it off to my friends, I would have them hold the robot up by the tail, because that way it starts to cry. At some point, I realized that it was bothering me when they did that, and I would tell them to put it back down. I knew how it worked, but I still had feelings and empathy for it. That made me start thinking more about, "Oh, what does it mean that we have these lifelike machines that are coming into our lives and increasingly coming into our shared spaces, and they can mimic these cues that we respond to, even though we know that they're fake?" That started my interest in human-robot interaction.

Abby: I have a five-month-old and when he cries, the hair stands up on the back of my neck and I have a completely visceral need to comfort him. A robot or an AI toy can mimic that and elicit that same response - is that the future? Is that where we're going?

Kate: The future is in some ways up to us. We can decide how to design these machines but we are starting to see that it's easy to get people to treat robots like they're alive. People will name their Roomba and they'll feel bad for it when it gets stuck. If you do that even with a disk, then you can imagine robots that are designed by Pixar animators or people who know how to create emotionally compelling characters. 

We respond to that, like you said, on a deeper biological level. That's not us being unfamiliar with this technology. It's the same way that we respond to animals and other non-humans. We project human emotions onto them. We're social creatures and we love to respond to social cues.

We also respond to the physical movement that robots have because our brains are hardwired to separate the things in our environment into objects and agents. When these objects move like agents, we subconsciously treat them like they have intent in their movement and like they have human or animal-like emotions. That is the future. Whether it's a good or a bad thing, and how we should design for it, are questions that are up to us.

Abby: I have definitely apologized to my Roomba! This idea of an object that moves human-like, is this what you mean when you talk about life-like machines?

Kate: Yes, and it's something that can be designed intentionally, like the Pleo robotic dinosaur, or that can happen unintentionally, like the Roomba. Just the fact that it's moving around in your space on its own might cause you to apologize to it or treat it a little bit like you would treat a pet. We are seeing increasingly lifelike machines; whether or not they're intentionally designed to be that way, that's how we perceive them. Now that robots are moving out of factories and into workplaces, households, and public spaces, we're going to see a lot more responses like that, where we treat them differently than toasters.

Abby: Robots can be everything from a machine on the manufacturing line stapling a car together to the autonomous vehicle that's driving you around, but there's an immense difference between a lifelike companion robot and a robot sitting on an assembly line, for example.

Kate: Totally. Part of what my book is about is making a comparison to animals because I feel like those are the other non-humans that we've dealt with, where we treat a lot of them as tools and products, and some of them are companions. We've made the same types of distinctions that I think we're going to start seeing with robots as well.

Abby: It's interesting, this idea of applying our emotions to animals or to something that we've become so familiar with. I think my dog feels sad at times. That's probably me projecting onto him. Robots actually have personalities that we've programmed into them, and a range of emotions. Do you think that they're going to become even more companion-like than our pets because they're built to replicate us and our feelings, as opposed to an animal?

Kate: To me, it’s still an open question: are we trying to recreate ourselves and human emotion and human intelligence? I would argue that that's quite limiting and boring. We can create something new. I called my book, “The New Breed”. We can create a new relationship. We can design it to be what we want.

It doesn't make sense and we're not currently capable of recreating human intelligence or human skill, especially social skills. These machines perceive the world differently than we do. I would argue that we shouldn't even be trying to go in that direction but rather should be thinking about this as a supplement. It's not going to be like an animal. 

There are lots of differences between animals and robots. You can't dictate an email to an animal. At the same time, animals are better at navigating physical spaces than robots are. 

In the same way that animals have supplemented our social relationships instead of replacing them, I feel like that's how we should be building social robots as well. We can think about how to partner with these technologies. How can they be beneficial to our goals, rather than trying to recreate ourselves?

Abby: Do you think that there's also the opportunity that robots could teach us something? We have smart home devices with Alexa or Google Assistant. I ask them questions all day long, they teach me what the weather is outside before I open the door... is there a bigger, more compelling case for that?

Kate: There are certainly a lot of different use cases. You were talking about Alexa, and I remembered that the other day Alexa told me the animals in the zoos were sad because no one could visit them due to the pandemic. It's like, "I didn't need to know that, Alexa, that made me sad." But as for what we could use robots for, we are already seeing compelling use cases in health and education, for example.

There's research that has been going on for over a decade now. I'm not involved in it, but some of my colleagues did research with children on the autism spectrum, and it turns out that robots are a new tool that can engage kids in types of therapy we haven't been able to do with animals or with people. The robots occupy this interesting space where the kids treat them like social agents, but they also know the robots don't come with the baggage of people. You can get the robot to talk, which an animal can't, and the kids will practice social skills with it in the same way.

Other applications use robots therapeutically as a replacement for animal therapy; they are soothing for dementia patients. There are also some questionable use cases. Unlike animals, robots can record your conversations. Animals sometimes have their own agendas, and the people who create robots can have their own agendas as well, and those are even more hidden behind the scenes.

Abby: Do you have any qualms about having robots as part of your family, interacting with your kids, and listening to everything that you do daily to decipher the world around them?

Kate: I have mixed feelings about the robots we have in our home because like you said, a lot of them do have the capability to collect information. The data is stored somewhere in the cloud and certain companies have access to it. I have to have these technologies because I can't speak about them if I'm not engaging with them. 

My kid is only three years old but as he gets older, we're going to try to teach him more about what is happening in these robots and teach him more about what AI is capable of, what these microphones and cameras mean, what happens to the data so that he grows up tech literate in this world.

Abby: Our daughter, who is also three, has never lived in a world where your house couldn't respond to you. She walks into her room and she says, "Okay, Google, turn on the lights." And the lights turn on. She knows how to interact with her house and the inanimate objects around her. One of the big things that we've been conscious about is making sure that she interacts with them politely and still assigns them a measure of respect. But I do wonder: how important is it to model this ethical behavior with inanimate robots?

Kate: We don't know how important it is, because we don't have enough research on how these systems change people's behavior. What we do have is a little bit of research showing that the way people treat lifelike robots tends to reflect their general sense of empathy or how social they are, but we don't know if it has an impact on kids.

We don't know if kids who get used to barking commands at Alexa or the Google Home will start barking commands at people. But there's enough concern about it, especially among parents like yourself, and so many parents complained to these companies that both Amazon and Google have released add-on features you can turn on that make you say "please" and "thank you".

People are starting to realize as we interact with these devices and as they get better at mimicking actual human social conversations or mimicking life in general, that, "Oh, maybe it's not okay to kick the robot, even if it can't feel anything, because that feels wrong. And maybe that's muddled in my subconscious and could make my kid more likely to kick a real dog or an animal or another kid." So there are some questions that we don't have answers to but they're worth asking.

Abby: Do you think that robots will ever fully understand what humans go through? Like PMS, grief, excitement... How well are they going to be able to interpret our emotions in your opinion?

Kate: I never say never. This is why I like the animal comparison so much, it's like asking the question, "Does your dog understand when you're upset?" Dogs can pick up on you being upset. They understand some piece of that, but they probably don't understand the full range of human emotions that you're experiencing. Probably the same is going to be true for quite some time for robots and AI where they can pick up on cues, they can respond to cues, they can mimic things, but they won't understand the world or feel the world the way a human does.

Abby: Here at LDV, we look at businesses that leverage visual data. A lot of times they're using computer vision, machine learning and AI to understand things like video, photo, LiDAR, radar, etc. In this idea of creating life-like devices that can at least recognize when we are upset or excited, how important is it to have a visual tech feature built-in?

Kate: What do you mean by "how important"? There are different ways that you can measure emotion. You can look for cues like people's voices or body language. There are also a lot of problems with these technologies, because the people who build them often build our human and cultural biases into them.

Many people are doing work on bias in artificial intelligence. We're already seeing some huge problems in facial recognition, where systems are trained on mostly white faces and then can't identify Black faces, for example.

The most important thing is to understand the limitations of the technology, rather than believing that we can do anything with AI, or we should do anything with AI. Just because we can doesn't mean we should necessarily either.
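
Kate's point about facial recognition systems trained mostly on white faces is the kind of disparity that a disaggregated evaluation, like the Gender Shades audit mentioned below, is designed to surface. As a minimal, purely illustrative Python sketch, with invented groups, labels, and predictions standing in for real audit data, one could tally accuracy per demographic group instead of reporting a single aggregate number:

```python
# Hypothetical sketch of a disaggregated evaluation, in the spirit of the
# Gender Shades audit discussed here. Groups, labels, and predictions are
# invented placeholders, not real audit data.
from collections import defaultdict

# Each record: (demographic_group, true_label, model_prediction)
results = [
    ("group_a", "face", "face"),
    ("group_a", "face", "face"),
    ("group_b", "face", "no_face"),  # a miss that an aggregate number would hide
    ("group_b", "face", "face"),
    # ... a real audit would use thousands of labeled images per group
]

# Tally accuracy separately per group so disparities stay visible,
# instead of reporting one overall accuracy figure.
correct, total = defaultdict(int), defaultdict(int)
for group, truth, prediction in results:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.0%} over {total[group]} samples")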

Abby: We had an earlier interview with Timnit Gebru, former lead of Google's Ethics in AI group, where we discussed similar topics. At LDV, we think a lot about the way computer vision and facial recognition are carrying the bias of today's society into the world of tomorrow. If we can diversify our datasets through video, then putting a camera onto nearly every robot that will live in our homes might be the way to do it. Whether that's good or bad is still up in the air, as you said, but all these virtual assistants keep learning from us regardless. For them to get better, we're of the opinion that they need to be able to see us, in the same way that I can see your reaction to the things I'm saying and process it to ask a better question or give a better response.

Kate: I do see that there's a lot of incentive to push in that direction, because the functionality of these machines often relies directly on being able to collect that data. The whole industry, and the whole world, is moving toward trying to collect as much data as possible.

Timnit Gebru and Joy Buolamwini's Gender Shades project called out some of the huge problems that exist in facial recognition technologies. They ended up forcing companies to rethink and change some of their processes. There's so much incentive to collect all of this data, and not a lot of incentive to protect people's privacy or think about these bias issues. Those harms are less visible to us than the immediate functionality we get from the robot.

How are we as a society going to push for companies to be truly responsible and think very carefully about how they're developing these technologies and about whether that's the right direction to go in? I'm grateful that there are people like Joy and Timnit out there. I'm sad that Timnit got fired from Google because they need people like her.

Abby: Totally agreed. In your book, you mentioned the idea of diverse kinds of relationships. Can you shed some light on how you're thinking about it?

Kate: On a social level, we're often talking about robots as human replacements, whether that's in our science fiction stories or our backyard conversations. The first question any roboticist gets is, "Oh, is this meant to replace a teacher, a caretaker, a human?"

It’s especially on point when your book and your baby drop on the same day!

That's a little bit of a red herring, because not only are robots a terrible human replacement, and will be for a long time, but that's also not how we should be building them, and that's not what we should be aiming for.

The animal example is great because it shows how social we are as creatures, how diverse the relationships we can create are, and how those relationships can supplement each other. People aren't worried about you being antisocial because you have a dog at home, right? We are somehow very accepting of the idea that we can partner with both animals and humans, and that one doesn't take away from the other. Robots can fit into that pretty seamlessly as well. We need to be responsible about how we design them, and that's what we should be focusing on, instead of this question of, "Are they going to replace us?"

Abby: If you had input on how these robots were designed to make them better and to help augment us or improve humanity as we go forward, what are three things that you would ensure are a part of every robot that's developed from here on out? 

Kate: It's a tough question because there's such a range of what robots can be used for. It depends on the use case. There are some use cases where I wouldn't want the robot to act socially. There are some use cases where I would want the robot to act socially but not look like a human.

Abby: What's an example of a robot that you wouldn't want to be social with?

Kate: If a robot is supposed to function as a tool and you have people relating to it on an emotional level, that can be anything from inefficient to dangerous. Adding social aspects to a robot can also cause people to trust it or assume that it can do certain things. If the robot is just a glorified calculator, you don't want that mismatch between people's assumptions and what the robot is there for.

Tool robots should be tools, and social robots should be social. There's a whole in-between space where you can use social aspects, or where people are going to treat it like a social device no matter what you do.

There are cases where it depends on what you're trying to do. We're not yet aware enough of how differently people treat robots compared to other devices, and of how much we need to take that into account when we design them.

Abby: I won't apologize to Roomba again.

[Photo: Abby Hunter-Syed and Dr. Kate Darling]

Kate: That one I don't mind. Here's an example: you can see grocery store robots in Stop & Shop stores. The specific robot I'm talking about is named Marty, and Marty looks like a giant penis.

Image credit: Badger Technologies

One of the students I work with at MIT noticed that everyone hates this robot. All of her friends and family were saying, "Oh, these robots were in the way, and I hate it." So she did a sentiment analysis on Twitter, looking at spikes in negative or positive mentions of Marty. She found that the most negative mentions of the robot happened when Stop & Shop celebrated its birthday with free cake and balloons, when they made a big deal about it being social.

If you make the robot look social by putting googly eyes on it so that it has a face, then people are annoyed when it's in their way. They're like, "Why can't it see me? Why can't it move?"

If it looked more like a machine, then maybe people wouldn't be quite as annoyed by it. They're expecting it to behave better because it has a face. That's one example of trying to be aware of the use case.
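
Kate describes the Twitter study only at a high level. Purely as an illustration of what such an analysis might look like, here is a minimal Python sketch; the tweets are invented, and NLTK's VADER scorer and the -0.3 "negative spike" threshold are our assumptions, not the student's actual method or data:

```python
# Hypothetical sketch of a Twitter sentiment analysis like the one described above.
# The tweets are invented examples; VADER (via NLTK) and the spike threshold are assumptions.
from collections import defaultdict

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon VADER needs for scoring
analyzer = SentimentIntensityAnalyzer()

# (date, tweet text) pairs mentioning the robot -- made-up data for illustration
tweets = [
    ("2019-04-01", "Marty the robot was in my way again, I hate it"),
    ("2019-04-01", "why does the grocery store robot just stare at spills"),
    ("2019-04-02", "ok the googly eyes on Marty are kind of cute"),
    ("2019-04-03", "they threw Marty a birthday party?? I can't stand this robot"),
]

# Average VADER "compound" score per day; it ranges from -1 (negative) to +1 (positive).
daily_scores = defaultdict(list)
for day, text in tweets:
    daily_scores[day].append(analyzer.polarity_scores(text)["compound"])

for day in sorted(daily_scores):
    mean = sum(daily_scores[day]) / len(daily_scores[day])
    flag = "  <-- negative spike" if mean < -0.3 else ""
    print(f"{day}: mean sentiment {mean:+.2f} over {len(daily_scores[day])} tweet(s){flag}")
```

Averaging the score by day makes it easy to spot the kind of negative spike Kate mentions around the robot's birthday promotion.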

Image credit: Hotel EMC2

Abby: I heard a similar story about hotel EMC2's room service robots. The initial version of that robot had two eyes, a smiley face and a screen. It freaked everybody out. Everyone was like, "I don't want to get in the elevator with this thing” or “This random thing with a giant smiley face showed up at my door." With its current look, the robot is much more successful because people aren't as freaked out by it.

Kate: It's such an interesting time that we live in because the robots are coming into shared spaces and we're seeing the growing pains. We're seeing all the mistakes happen and seeing the industry learn from these mistakes.

Abby: If there was one thing that you want people to take away from your book, what would it be? Let me give you an example of somebody reading your book: what if it were the person developing the next inventory robot for the grocery store?

Kate: I’d want them to think about whether they are subconsciously trying to recreate a human task. If they think outside of the box, they can create something better.

Abby: We look at a lot of robotics companies, and robots need to be able to grab items and do other things too. We have a portfolio company called Voyant Photonics, which builds a LiDAR on a chip. We think that LiDAR is going to be at the fingertips of robots in the future to help them understand the dimensions of objects… but this idea that the hand is the best tool for everything we do is false, right? When we started to think about it, we questioned why you would develop a robot with hands if hands can't reach the thing that fell behind the desk, or can't automatically screw that piece back onto that machinery, or whatever it is. It might be better to think outside of the human form.

Kate: Sometimes people argue, "Oh, robots have to look like humans because we have a world built for humans." We have stairs and narrow passageways and things that they have to grab. That's too nearsighted, because if you think about building a world that's wheelchair accessible, for example, you've suddenly killed two birds with one stone. This is something that Laurel Riek, a roboticist at UC San Diego, once said to me: "Well, if we designed for wheelchairs, then you could have much cheaper robots with a wider range of abilities and you would make the world more accessible for people." Thinking outside the box is important on every level in technology development.

Abby: Everybody most likely saw Boston Dynamics' robots dance to celebrate the end of 2020. It was great to see that it brought some human-like personality to a bunch of robots that don't look like us. They're designed to act like us in the way that they walk, but they try to be better than humans and make the world better.
My last question for you: if computers, robots, or AI didn't exist, what do you think your career choice would have been?

Kate: I went to law school because I wanted to be a criminal defense attorney so maybe that's what would have happened.


Kate Darling's book "The New Breed" is out. Go get your copy and let's discuss it!