The Conscious Home is Achievable in the Next 15 Years: Your home will leverage visual sensors to be smart enough to understand what you want or need

In no order: Jan Kautz, Director of Visual Computing Research at NVIDIA; Gaile Gordon, Senior Director Technology at Enlighted; Chris Rill, CTO & Co-Founder of Canary at the 2016 LDV Vision Summit ©Robert Wright/LDV Vision Summit

Join us at the next LDV Vision Summit. This "Visual Sensor Networks Will Empower Business and Humanity" panel discussion is from our 2016 LDV Vision Summit.

Moderator: Evan Nisselson, LDV Capital with Panelists: Jan Kautz, Director of Visual Computing Research at NVIDIA; Gaile Gordon, Senior Director Technology at Enlighted; Chris Rill, CTO & Co-Founder of Canary

Evan Nisselson: Thank you very much for joining us. You heard and saw my recent article about the Internet of Eyes being a larger market opportunity than IoT. Before we jump into that, I'd love to hear two sentences from each of you. What do you do at your company and what does the company do? Gaile, why don't you take it away.

Gaile Gordon: Okay. So, I'm at Enlighted. Enlighted introduces dense sensor networks into commercial buildings through lighting. We use the sensor networks to control how much energy is used by the lighting, which is what pays for that venture, but they also produce interesting data sources which are used for HVAC control and space planning. I've been there since January and I work with the CEO on the next generation of sensors and applications that run on that network.

Jan Kautz: I lead the visual computing research group at NVIDIA, which is to say I do research with my team on computer vision and machine learning. NVIDIA probably doesn't need a lot of introduction. You all probably know graphics cards are what we're known for. Recently, we use them more and more generally, and we sell them into all types of markets, from self-driving cars to the cloud and so on.

Chris Rill: I'm one of the founders and CTO at a company called Canary here in New York City. We build devices that protect your home and other environments. We've packed an entire security system, an HD camera, a microphone, and life safety sensors, into a device the size of this bottle of water. We connect those devices to AWS and we send you alerts when we detect anomalies in your environment. You can control this system all from your smartphone.

Evan Nisselson: So, one of the things that I love is a very diverse panel with three totally different perspectives, and that can be challenging, but there are also going to be a lot of synergies. Gaile, why don't you kick it off and tell us a little bit about what's a smart commercial building?

Gaile Gordon: The primary thing that makes a building smart is that it has sensors. It has a way of reacting to what's going on in the environment. Using, for instance, where the people are in the building to control, at a very fine degree, how much energy is used by the lighting, and also how much light is coming in from the windows, to change the dimming levels, etc. But more than that, it's a source of data that you can use to study what the behavior patterns in the building are, which is a really interesting thing for other applications.

Evan Nisselson: For what?

Gaile Gordon: For space planning, for instance. So, to make sure that you're using the building to its best efficiency. That your conference rooms are being used, that they are sized appropriately. Then, I think going forward, there's going to be a lot of new interesting things that we can do with the network that's already there. Pay for it through the energy savings. To do things like, active tracking, indoor location, lots of really, really, interesting things that I think people will find valuable to their daily lives.

Evan Nisselson: What's the main ROI for a building to start using these sensors?

Gaile Gordon: The energy savings that it introduces, which are about 75%.

Evan Nisselson: Okay.

Gaile Gordon: So, it's a no-brainer.

Evan Nisselson: Right.

Gaile Gordon: As opposed to previous applications which studied where people were in the building, which had to first pay for putting that network there, and sometimes that was obvious in terms of its value and sometimes it was not. But in this case, it's completely obvious. The cost is immediately covered through the energy savings, which gives you a really interesting business model to figure out what else to do with the platform that's there.

Evan Nisselson: Right. Chris, take it away from there. Evolution of the sensors in commercial buildings, you're obviously more focused on the home. Tell us a little bit about some of the sensors, in addition to the camera, that are on the Canary devices.

Chris Rill: Sure. When we started designing Canary about four years ago, we knew that video and audio were so important because of our smartphones and the fact that we have been looking at these really crisp images. We wanted to give a better picture of what was going on within the home, and that's really the things you can't see, especially when you're not in that environment. So, temperature, humidity and air quality were really important for us to really understand the context of that environment. For situations where you're monitoring, say, an elderly parent or a child, understanding their comfort and the well-being of that environment goes far beyond video and audio. That's one of the reasons why we included those additional sensors, so that you have almost a telepresence in those environments that you were monitoring.

Evan Nisselson: So as a co-founder, why did you start this business? I'm sure there have been some surprises in the technology challenges. Why'd you start it, in a short sentence or two? And what's the biggest surprise once you started connecting all this data, because it relates to the signals that are either useful or not.

Chris Rill: For me, it comes from personal experience. About six years ago, my apartment was broken into while I was living abroad. I sought a system to monitor my apartment. Like any engineer, I went to the store to look for something that I could put in my home and it was a local hardware store. There was nothing that you could simply buy and place in the corner of your home. So, I bought these sensors you stick on the windows and doors and I hacked together a camera and a server in the cloud. That was my security system.

Today, companies like Canary are enabling everyone to do that. Not everyone's an engineer. Not everyone has the resources to go and put that system together. That's one of the reasons why I got connected with my co-founders, Adam and John, and why I'm so passionate about using technology to understand what's going on in environments, because I forever have that baggage because of the trauma in my life.

On the question about surprises: I would say it's been so surprising to see how hard it is to build a consumer product company. Not just a device, but to have it be available and working at the quality it needs to work at all day, every day. I think that's especially true for something I'm passionate about, which is security, securing these products at scale. Because I'm actually terrified of the internet of eyes. I know it's not just about the algorithms, but also about the security of the information that these algorithms are analyzing.

Evan Nisselson: Jan, you're kind of perfectly sitting on the panel in-between both of these opportunities. I liked how you orchestrated that. Tell us some use cases that you're working on today that will relate to both of these. And maybe one that you're most excited about in working on with your team.

Jan Kautz, Director of Visual Computing Research at NVIDIA ©Robert Wright/LDV Vision Summit

Jan Kautz: We recently started working on something which really relates to both of these cases: using visual sensors to do activity and action recognition in videos. One of the use cases at home might be, you have lots of cameras in your home, and although you might be able to monitor what's going on remotely, for instance if you have an elderly parent that you care for and you're not at home and your parent falls, you wouldn't know. But your sensors are able to recognize that your parent fell and call the ambulance directly. Those are use cases that wouldn't be possible otherwise, and those are the things that I think will make a big difference in people's lives in the future.

Evan Nisselson: Is that something that's possible right now? Or are we talking five years out or eight years out? Without giving away the secrets. I mean, how soon is this? Because Chris is smiling. He says, "I want to use that."

Chris Rill: When can I have it?

Evan Nisselson: Exactly.

Jan Kautz: It's a question of robustness, right.

Evan Nisselson: How do you define robustness?

Jan Kautz: How many false alerts are you willing to deal with?

Evan Nisselson: So, Chris, how many false alerts are you willing?

Chris Rill: Oh, man. Well, as an industry, security is 99% wrong. So, there's a very low bar that you need to meet-

Evan Nisselson: So, you want it today? That's what you're saying?

Chris Rill, CTO & Co-Founder of Canary ©Robert Wright/LDV Vision Summit

Chris Rill: Yeah, today. All joking aside, we've seen with the application of computer vision that we've been able to reduce false alarms by up to 80%, and it continually gets better as we get better labels and better data to train our models. So I 100% agree about false alarms. The first time you freak out because you think your mother or father fell, you're going to cry wolf, and eventually you're going to shut the system off. That's what traditional security has had to deal with for years.

Jan Kautz: There's still some way to go.

Evan Nisselson: Okay. Give us another use case while you're on the hot seat. Give me another use case that you're really excited about.

Jan Kautz: I think the other one is self-driving cars. I think that's going to be a big use case for sensors. Sorry, it's not just visual sensors; there will be additional sensors on cars. But it will be a big change to the way we live as well. In 10 years' time, everybody will have-

Evan Nisselson: Okay, so today most of the cars that are on the road, how many sensors are in the car?

Jan Kautz: There's still a lot of sensors.

Evan Nisselson: Roughly, what do you think?

Jan Kautz: There's radars, ultra-

Evan Nisselson: Are there 10, 20?

Jan Kautz: 10.

Evan Nisselson: Okay. In 10 years or 15 years, how many will be in the car? Or just one controlling more?

Jan Kautz: No. There will be more sensors. There is disagreement which ones and how many. There will be more, but which ones and exactly is unclear. I'm betting on cameras and radar. Not LiDAR, because I think they make your car look ugly because they're big and bulky.

Evan Nisselson: Just because of a look factor?

Jan Kautz: No one wants to put a big spinning LiDAR on top of a car.

Evan Nisselson: Right, right. Okay.

Gaile Gordon: It doesn't need to spin.

Gaile Gordon, Senior Director Technology at Enlighted ©Robert Wright/LDV Vision Summit

Jan Kautz: It doesn't need to spin. Right!

Chris Rill: There was a Kickstarter campaign that did a small LiDAR unit for about $200. I'm not sure how it did. Anyone back that-

Evan Nisselson: There's a bunch of them working on it.

Gaile Gordon: I think it's safe to say there will be 3D sensors on the car.

Jan Kautz: Yes. Which form they take is the question.

Evan Nisselson: What are the options? Because there's a technical audience here, and it's a technical first day.

Jan Kautz: It could be stereo cameras, that's one way. LiDAR is the other way. We might just need more of them if we don't have a spinning LiDAR, but you could do that.

Evan Nisselson: So, Gaile talk to us a little more about the technical side for the very technical folks in the audience. Because it is our first technical day. What is the capacity of the sensors you have now? Talk to some of the challenges that you're seeing today and in the near future.

Gaile Gordon: So for Enlighted, the sensors that we have now are relatively simple sensors. They're based on thermal data, ambient light, temperature, things like that. They're already quite powerful. But the challenges that we see: lighting has to be extremely interactive, and all of the computation that you're going to be doing is local. So the big challenge is doing the processing locally, so that when you walk into a room, it reacts quickly, and when your network is down, if you happen to have a network interruption, your lights are still working. So processing on the edge is probably the biggest issue, and security, I think, is the next one. You don't normally think of hacking lights, but that can have very broad impact. You don't want your smart building system to be hacked, but you also want people to be able to have personal control over their environment, and so the push-pull there is another huge issue that we have. I think getting more advanced sensors into our networks at a cheap enough and pervasive enough level is the next challenge.
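To make the edge-processing point concrete, here is a minimal sketch, in Python rather than fixture firmware, of the kind of decision Gaile describes happening locally: each fixture computes its own dimming level from occupancy and ambient light, so a network outage never delays the lights. The function name, target lux value, and dimming formula are all illustrative assumptions, not Enlighted's actual logic.

```python
def dimming_level(occupied: bool, ambient_lux: float,
                  target_lux: float = 500.0) -> float:
    """Return a dimming level in [0.0, 1.0], computed entirely on the fixture.

    All thresholds here are assumed values for illustration only.
    """
    if not occupied:
        # Vacant space: lights off. This is where most of the ~75%
        # energy savings mentioned above would come from.
        return 0.0
    # Daylight harvesting: only supply the lux the windows don't provide.
    deficit = max(target_lux - ambient_lux, 0.0)
    return min(deficit / target_lux, 1.0)

# The fixture reacts instantly to its own sensor readings; only aggregated
# occupancy data would ever need to leave the building for analytics.
print(dimming_level(occupied=True, ambient_lux=200.0))   # partial power on a bright day
print(dimming_level(occupied=False, ambient_lux=0.0))    # empty room: off
```

The design point is that the control loop closes locally and the cloud only sees the byproduct data used for space planning.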

Evan Nisselson: You're sitting very close to someone that might be able to influence that. What would you ask for?

Gaile Gordon: Well, it has to be cheap.

Chris Rill: A discount.

Gaile Gordon: I think that's the primary issue, right? At the price point that we're currently at, you'd have to be able to perform at least the tasks that you can do today, better.

Evan Nisselson: So, just for perspective, how many sensors are on the Canary? How many visual and how many total?

Chris Rill: So we have about, well it depends on what you consider visual.

Evan Nisselson: Your definition.

Chris Rill: Camera, so we've got one.

Evan Nisselson: Okay, so anything that can see, thermal or otherwise, that is not actually a camera as we know it.

Chris Rill: Yeah, correct. We have one camera and then, beyond that, we've got many different sensors. We have about a half dozen sensors that we use. Some of them we use for user value, like our temperature and humidity, but also ambient light sensing so we can turn on night vision and other such features. But the main ones that the user can see are CNR, APAR, the camera, temperature, humidity and air quality.

Evan Nisselson: Talk to us a little bit about how do they talk to each other? Are they triggering actions locally on the device? Are they communicating back with the cloud? How is the smartness connected?

Chris Rill: Sure. Today we leverage the sensors on the device to change the internal state of the Canary device itself. We have algorithms that interpret the visual data to try to understand it at very low compute, because, since we're talking tech today, the Canary device only has about 400 megahertz of computational power. So you know, you can't really do a lot there.

So, we try to understand what's going on in the Canary device visually and then we upload that to the cloud for further contextual analysis to try to understand whether we have people, pets or just background noise like a shadow or a light change. Then, from there, if we do detect that there's something of interest for you, we will send you a notification and let you know what's going on.
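The two-stage split Chris describes, a cheap gate on the low-power device followed by contextual analysis in the cloud, can be sketched as follows. This is purely an illustrative sketch, not Canary's actual code: the frame-differencing gate, the 2% motion threshold, and the label names are all assumptions.

```python
MOTION_THRESHOLD = 0.02  # assumed: fraction of pixels that must change to count as motion

def motion_detected(prev_frame, frame):
    """Cheap per-pixel differencing over flat grayscale pixel lists.

    This kind of check is affordable on a ~400 MHz device, unlike full
    scene classification.
    """
    changed = sum(abs(a - b) > 25 for a, b in zip(prev_frame, frame))
    return changed / len(frame) > MOTION_THRESHOLD

def handle_frame(prev_frame, frame, classify_in_cloud):
    """Upload for contextual analysis only when the edge gate fires."""
    if not motion_detected(prev_frame, frame):
        return None                       # nothing uploaded, nothing to notify
    label = classify_in_cloud(frame)      # e.g. "person", "pet", "background"
    # Notify only on events of interest; shadows and light changes
    # come back labeled "background" and are suppressed.
    return label if label != "background" else None
```

The point of the split is that the device spends its limited compute deciding *whether* something happened, and the cloud decides *what* happened.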

Some of the other sensors that we use that are really about understanding the environment are temperature and humidity. If the temperature dips or spikes, or the humidity moves outside of 30 to 50%, which is really the comfort zone for the home, we'll send you a notification to say, "Hey, we see that your humidity is low." From an air quality standpoint, that's a whole other can of worms. What is air quality? Is it a pollutant? Is it carbon dioxide? I'm sure people in offices really want to understand what air quality is. So, that's really a qualitative sensor for us. Is the air changing? Things like cigarettes and cooking actually influence that. But today, it's still really-

Evan Nisselson: Cooking in the home, you mean?

Chris Rill: Oh yeah.

Evan Nisselson: When a steak smells really good it affects Canary?

Chris Rill: Yeah, I'll get an alert sometimes that says your air quality is very bad. Which, for me, is actually a good thing.

Evan Nisselson: So that brings up the perfect segue: will it tell you when that steak is ready? Medium-rare.

Chris Rill: We're working on it! R&D. One day! There are connected frying pans, though.
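Joking aside, the comfort-band rule Chris mentioned a moment ago, notify only when relative humidity leaves the 30 to 50 percent home comfort zone, fits in a few lines. This sketch is illustrative: the high-humidity message and the absence of any hysteresis are assumptions; only the low-humidity wording comes from the conversation.

```python
COMFORT_LOW, COMFORT_HIGH = 30.0, 50.0  # percent relative humidity

def humidity_alert(rh_percent):
    """Return a notification string, or None while inside the comfort zone."""
    if rh_percent < COMFORT_LOW:
        return "Hey, we see that your humidity is low."
    if rh_percent > COMFORT_HIGH:
        return "Hey, we see that your humidity is high."
    return None  # inside the comfort zone: stay quiet

print(humidity_alert(22.0))  # notifies: below the band
print(humidity_alert(40.0))  # None: no alert
```

A production version would likely add hysteresis or rate limiting so a reading hovering at 29-31% doesn't generate a stream of alerts.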

Evan Nisselson: So, that's kind of where, you know, the interesting thing is: the connection of different signals, not only from Canary, but from the phone and other things that maybe interact with Enlighted or other companies. The APIs that we are used to online are then going to become the APIs of the internet of things and the internet of eyes. Give an example of what would excite you.

That's the opportunity, really: the synergy between all of these different signals. Today I would say we don't do a really good job of integrating all of the different signals in your home and all of the different signals that are publicly available to add the right context.

-Chris Rill

Chris Rill: That's the opportunity, really: the synergy between all of these different signals. Today I would say we don't do a really good job of integrating all of the different signals in your home and all of the different signals that are publicly available to add the right context.

One thing that really excites me, and I don't know if I've talked about this publicly, though I have talked to some of you about it, is using the accelerometer on Canary. I believe that we will have the largest seismic activity detection system in the world because of the number of units we have deployed, but we haven't yet started looking at that accelerometer data and correlating it with the seismic readings that we get from the different government agencies around the world.
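The fleet-wide seismic idea rests on a simple observation: one device's accelerometer spike is just a slammed door, but many devices in the same region spiking within the same few seconds looks like ground motion. Here is a hypothetical sketch of that correlation step; the thresholds, window size, and tuple format are all invented for illustration.

```python
from collections import defaultdict

SPIKE_G = 0.05     # assumed per-device trigger threshold, in g
WINDOW_S = 5       # assumed co-occurrence window, in seconds
MIN_DEVICES = 3    # assumed number of devices that must agree

def detect_events(readings):
    """readings: iterable of (device_id, region, timestamp_s, accel_g) tuples.

    Returns the sorted regions where at least MIN_DEVICES distinct devices
    spiked within the same time window: candidate seismic events that could
    then be checked against government agency feeds.
    """
    buckets = defaultdict(set)
    for device, region, t, g in readings:
        if g >= SPIKE_G:
            # Bucket spikes by region and coarse time window.
            buckets[(region, int(t) // WINDOW_S)].add(device)
    return sorted({region for (region, _), devices in buckets.items()
                   if len(devices) >= MIN_DEVICES})
```

A real system would handle events straddling a window boundary (e.g. with overlapping windows) and weigh device density per region, but the agreement-across-devices idea is the core.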

I would say that from an integration perspective with all of these other companies, we have an integration with Wink. They're an OS for the home with which you can control your lights and your thermostat and add a few different sensors. We have an integration with them, and surprisingly there are a lot of people who have started to integrate Canary with Wink. That's something that we're not super focused on, because we feel like the opportunity for us is to empower people with data: meaningful insights from the environment around them. The control of things is really just a small, tangential niche that we're eventually going to get to, but there are companies like Apple and Google and Samsung all fighting over your light switches and your refrigerators and all those other things in your home that you may or may not have. The information in your home, everyone has that, and everyone should have access to it so that they can make decisions about the changes that go on.

Evan Nisselson: Gaile, similar to Chris, give us some perspective on the photograph that was in my presentation, which came from Enlighted: the heat map of the office. How many sensors are part of that network? And, to a question you'll hear shortly, is that going to increase exponentially, or is the number of sensors going to decrease while each has more power, so you don't need zillions? Give us a perspective.

Gaile Gordon: The Enlighted network is basically spaced every 10 feet. So, there's a sensor package every 10 feet in a grid around the facility, because every fixture essentially has a sensor package in it, so it's very dense. However, there are other things in the office as well. RF is one of the big things that is also intriguing. We're here talking about visual technology, but we also have to understand that the sensor fusion solution will be very interesting. There are a lot of things that traditionally the computer vision community had tried to do only with vision sensors, but now that everyone is carrying around a very powerful computing device with radios on it, a lot of things like indoor location can be either done entirely or augmented with RF as well.

Evan Nisselson: Just as an example for those that are not technical in the audience, when you say sensor packages, what does it look like? Is it a little box? Can we see it? Is it very small, how-

Gaile Gordon: Yeah. In the Enlighted fixtures you can barely see it. It's probably the size of a dime, is what you see. It's not very noticeable.

Evan Nisselson: I assume they're going to get smaller. Which brings me to Jan with the question of: when will these sensors be painted on the wall, so you actually can't see them? I'm sure Gaile's company would love to network a wall with thousands of sensors inside of it that can actually talk to Canary.

Jan Kautz: Well, as you mentioned, Apple patented the 2 millimeter camera, so you can place lots of cameras if they're only 2 millimeters in size. You can easily make cameras the size of a dime and place them all over your house. It will be possible. But there are questions about security; that becomes a real issue at that point. Do you want to stream all this data to the cloud? I would be very hesitant to stream from hundreds of cameras in my house to the cloud.

Evan Nisselson: Okay, let's talk about it for a second. It's the big issue. What's the difference between streaming two cameras or a thousand cameras, on a security level? It's either secure or it's not.

Jan Kautz: The more cameras you have the more they see, right. If you place it just in your living room-

Evan Nisselson: It's less about the vantage point than how many you have. So, if you know where your sensors or cameras are in the house, you can do the things you shouldn't do in a different room. Your issue is, if they're everywhere-

Jan Kautz: If they're everywhere. If I want a smart house, right, I want to see everywhere in my house so I won't have any blind spots. I don't want my elderly parent to fall in the one blind spot in my house.

Evan Nisselson: Right. So you can't have both. You have to have them everywhere-

Jan Kautz: You process it at home.

Evan Nisselson: Is that going to happen?

Jan Kautz: I think so.

Evan Nisselson: You think so, Gaile?

Gaile Gordon: Yeah, I think it always-

Evan Nisselson: So, what if you process it at home and keep it at home? In theory that's safer, though it really isn't if it's connected to the internet, versus in the cloud, which is the same thing?

Gaile Gordon: Well, I think one of the things we've got to think about is data ownership. Who owns the data? And I think in 10 years you might see the cameras not being in the environment but on you. Because when you own it, when you control the cameras, you own the data, and then the question is a little bit easier to ask.

Evan Nisselson: That's right. Chris.

Chris Rill: I was going to add that the companies that are building these products should build in privacy and control, so that if you don't want cameras on, you should be able to turn them off and trust that they're off. But there are other sensors that you can put around your home that still respect your privacy. For trying to understand if someone falls, there are other types of cameras where you may not really care, necessarily, about the information and whether it's going to the cloud. But I do agree that when we look at edge computing, as semiconductors become less expensive and consumer product companies can put more compute at the edge, that allows us to do some very interesting things, both respecting privacy and providing the value and context you need from these signals.

Tickets for our annual 2017 LDV Vision Summit are on sale
May 24-25 in NYC

Evan Nisselson: We've got about eight minutes left, so, we're gonna keep on talking here. Who has questions in the audience? Raise your hand and we've got mics that are going to be passed back there, but, there you go. We're ready already. Let's dig in, go.

Audience Question 1: Hey. My name's David Rose from Ditto Labs in Boston. I have a quick question about Canary. Once you have all these perches in people's homes, do you send video to any other cloud-based APIs for analysis? Or do any of your competitors?

Chris Rill: We do not. There are many competitors now. We've been around for four years, and people kind of got word of us on Indiegogo a couple years back, but I don't know about any other companies that are sending their data to third party APIs. I do know that there are, even in this room, other companies that do analysis to try to add context to video, which we do in-house.

Audience Question 1: Gotcha. Is that because of latency concerns? Or why wouldn't you send data to other cloud-based APIs for other services?

Chris Rill: Because, as a company, we do everything in-house, or we try to. We have a computer vision team and we want to make sure that that's defensible intellectual property that we have; if you're using a third party, you're going to have to pay for that.

David Rose: A secondary question is, since you do have cameras in the homes, do you see that there's an opportunity to do other services, like interior design consulting, or bring my cleaner around, or give me diet coaching, based on the things you might know about behavior in the home?

Chris Rill: Not in the near term. We're really focused on the security value proposition. What's going on in my home? Are my kids okay? Is my pet okay? Aging and place type monitoring. Make sure my parents are okay. In the future, maybe. But really, there's so much to do in the security field that I don't see us going into that in the near term.

Evan Nisselson: Who else? Okay, okay. You'll have more chances later. Does anybody have questions? One of the things I say at this point is that one of the biggest challenges of any conference is that everybody wants to meet other people. I always feel that when nobody asks questions it's impossible to know who you are. But those that ask the smartest, most interesting questions will inspire those other people to go find you. Look at that! It always works, it's amazing! It's just a very simple statement that just connects.

Patrick Gill: Thank you very much. I'm Patrick Gill from Rambus. I'm a computational optics research scientist. We're developing, sorry, a little bit of shameless self-promotion, a new kind of sensor that will not produce photographic quality images while monitoring wide angles in visual space, but may be able to address some of the privacy questions you might have.

Evan Nisselson: So, what's the question?

Patrick Gill: The question is, would it be worthwhile for companies like yours to have some kind of sensor that's of intermediate resolution? Much more resolution than a PIR sensor, for instance, and yet it would not be able to read the text of documents you're working on or identify people by face. Is providing privacy against facial recognition and snooping, in case your devices are hacked, a pressing concern that you folks are seeing from the industry?

Evan Nisselson: Gaile, Chris?

Chris Rill: Yes, we should talk. You know, there are places that cameras just aren't meant to go right now, like bathrooms and other places where there's an expectation of privacy. Different types of sensors perhaps fit the obligation you have to create private environments in your home, for instance. That's something that I'm really excited about, because there are places our current Canary cannot go. We want to make sure that we're able to monitor all environments, but do so responsibly.

Gaile Gordon: I would agree, and I think that beyond that, and I don't know about your product, but thermal data, 3D data, there are a lot of types of data that can be used to understand occupancy and what's going on in a room that are not really invasive of your privacy.

Evan Nisselson: So, Jan, tell us, you mentioned in emails with me one of the aspects that you guys are working on is gesture and that's a big project.

Jan Kautz: Right.

©Robert Wright/LDV Vision Summit

Evan Nisselson: Give us some use cases of that. What you're working on that you can share and how does it relate to this topic?

Jan Kautz: Right. Gesture recognition is something we've been working on for a while now. The first use case was in-car user interfaces. If you buy a BMW 7 Series, it already has a gesture recognition interface.

Evan Nisselson: What does it do, what does it-

Jan Kautz: It's fairly limited. You can take a phone call. I think stop playing music. Things like that. There's only five or six different-

Evan Nisselson: So what do you do to take the phone call? You point to it?

Jan Kautz: I think you make a specific gesture. I forget what it was.

Evan Nisselson: Okay.

Jan Kautz: You point to it. There's sort of a cube in space where you gesture and it recognizes whatever gesture you did, but there's only five or six different-

Evan Nisselson: So those people I see doing that are not crazy? They're talking to their car?

Jan Kautz: They are gesturing to their car, yeah.

Evan Nisselson: Okay.

Jan Kautz: We have a system that allows more gestures; you can add new gestures on the fly, and it's more reliable than the BMW one. The challenge here is that for user interfaces you really don't want latency. When you press a button you want immediate feedback; you don't want to wait half a second. The same goes for gestures. If you make a gesture and it takes half a second for anything to happen, you never know: did it work? Did it not work? So you want immediate feedback, and that's actually quite tricky for this type of interface, because you don't know if the gesture is ending, has ended, or is still ongoing. So you need smart algorithms to very quickly rule out any latency. That's something we have. It's actually related to action and activity recognition in videos; the problems are similar, but gesture is slightly easier because it's more specific. There are only 20 gestures, so it's easier to know what they will be.
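The latency problem Jan describes, firing before the gesture has visibly ended, can be illustrated with a toy online classifier: score the stream frame by frame and commit as soon as one class clearly dominates, instead of waiting for the gesture boundary. The accumulated-vote scoring here is a stand-in for a real model and every name and threshold is an assumption for illustration.

```python
def classify_stream(frame_scores, confidence=0.9):
    """Online gesture classification over a stream of per-frame scores.

    frame_scores: iterable of {gesture: probability} dicts, one per frame.
    Returns (gesture, frames_consumed) at the first frame where one class
    holds at least `confidence` of the accumulated score, or (None, n)
    if no gesture ever clears the bar.
    """
    totals = {}
    n = 0
    for n, scores in enumerate(frame_scores, start=1):
        for gesture, p in scores.items():
            totals[gesture] = totals.get(gesture, 0.0) + p
        best = max(totals, key=totals.get)
        if totals[best] / sum(totals.values()) >= confidence:
            # Commit early: no waiting for the gesture to finish,
            # so the interface can give immediate feedback.
            return best, n
    return None, n
```

With a small closed set of gestures (Jan mentions around 20), per-frame scores separate quickly, which is exactly why early commitment is feasible here but harder for open-ended activity recognition.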

Evan Nisselson: Got it. Any more questions from the audience and also questions you guys have for each other? You have three different opinions here that you can ask tough questions and we'll see if we get answers. Go.

Audience Question 2: Hi. You guys have all touched on the security industry in many ways. Do you see any trends that are going to get the security industry to use extra sensory data? Stereo cameras, range finders. Flir, I guess, is doing some infrared stuff, but is there any hope on the horizon for the very traditional security camera, mono camera?

Gaile Gordon: Maybe to just jump in with a quick thought: there's security and then there are a lot of other applications which are very related. Retail, for instance. The retail environment has always been very interested in slightly different data compared to security. For instance, they want to know whether there are adults or children in the space, which is a classic use of 3D sensors. So it's something relevant, since you happened to mention 3D.

Audience Question 3: I've been working on a project called the Human Face of Big Data for the past couple of years, and one of the things we found in the course of doing this project was that General Electric was working on a series of products aimed at aging at home. I'm curious about Canary. One of GE's products is called the Magic Carpet. You install a carpet in the home of your loved one and it actually builds sort of a model of what's normal for your parent, and then predicts that your parent may fall, just from muscle weakness or a change in their baseline behavior. These devices that you always see on TV, you know, "I've fallen and I can't get up," have this sort of shame factor where nobody wants to wear them, but now there's this gamification of health. Are you integrating with any of the Jawbone or Apple watches or any of these other devices that are not your own standalone device?

Chris Rill: We're not, but I do think that's an opportunity to help those that are aging in place. About a third of our customers are actually 50 and over. They will start to age in place as they get older, and it's an opportunity for Canary, and other companies like Canary, to provide services and technology to monitor those folks who are getting older, may ultimately start living alone, and need the assistance of people or technology to be more independent.

Evan Nisselson: Okay, great. We're just about out of time, but two quick questions, if you guys can answer in one sentence each. What are you most excited about in this internet of eyes and/or visual sensor sector that's going to happen in 15 years, or 10 years, or sometime in the future, that makes you say, "Wow, I can't wait for-."

Gaile Gordon: My favorite would be augmented memory.

Evan Nisselson: Which is in one sentence?

Gaile Gordon: Which is, where did I meet this person last?

Evan Nisselson: Okay, perfect. Jan.

Jan Kautz: For me it's a confluence of three areas: computer vision, machine learning and robotics. In 10 or 15 years we'll see a lot of new, very interesting robots that have capabilities we've never dreamt of.

Evan Nisselson: Like what? In one sentence.

Jan Kautz: Robots helping in your home. Like a butler.

Evan Nisselson: A butler. It's going to happen. Okay.

The conscious home is definitely achievable in the next 15 years...your home will be smart enough to really understand what you want or need

-Chris Rill

Chris Rill: I think the conscious home is definitely achievable in the next 15 years, and I think cameras will allow computers to get the context they need of what's going on. But it will be the combination of all these different incoming signals that provides the full context. Hopefully your home will be smart enough to really understand what you want or need. Because today, that's just not the case.

Evan Nisselson: Okay. So, the last question before we go network, and there are a lot of topics I'm sure people will hunt you down to discuss in more detail. For each of you: there are a lot of smart people in the audience who want to start businesses. What would you suggest they focus on that would be great for the industry? Separate from what you guys are focusing on, what's another thing leveraging visual sensors that you can't wait for someone to start working on? Go, Chris.

Chris Rill: Well, I would say my advice to anyone looking to go into entrepreneurship is: be very self-aware of what you're not good at. You're not going to be able to do it alone. You're going to have to find partners who are exceptional at the things you're not good at.

Evan Nisselson: Great. Jan.

Jan Kautz: I've thought about this for a while and couldn't come up with a good answer. I'm a researcher at heart, so for me it's hardest to tell people what businesses they should start.

Evan Nisselson: Or even from your space. What should someone work on as research?

Jan Kautz: I think pick hard and interesting problems.

Evan Nisselson: What makes a problem hard?

Jan Kautz: Something people haven't solved for a long time. Computer vision was one of those problems; it's now finally starting to work because of machine learning. I think pick hard and difficult problems.

Evan Nisselson: Gaile.

Gaile Gordon: I think taking the full-systems approach is the answer to success. You need to have something that works top to bottom and was made to work together.

Evan Nisselson: Fantastic. Round of applause for this panel. Thank you very much.

The annual LDV Vision Summit will take place May 24-25, 2017 at the SVA Theatre in New York, NY.