In 2003 They Called Evan Crazy - He Said Phones Would Replace Point & Shoot Cameras

Evan has been working in the visual tech industry for over 20 years. In 2003 he wrote that camera phones would take the place of point-and-shoot cameras, and everyone thought he was crazy.

Learn how visual technologies will change business and society in the future at our 2017 Annual LDV Vision Summit. Early bird tickets on sale until April 30.

Why Will Wireless Camera Phones Revolutionize the Photography Industry?

Original story posted in The Digital Journalist in May 2003 by Evan Nisselson. 

The digital screen in front of me says that it’s 3:32 AM London time; I am 38,000 ft above Greenland on my taxi between NYC and Milan. My laptop is plugged into the airline power system, I am about to order another whiskey, and I wish they had free Wi-Fi Internet access on the airplane like yesterday outside in Bryant Park, NYC.

For many years I carried a Nikon F around every day so I was ready to capture the next ‘Love the Living of Life’ moment to be able to share with others. However, in 1999 I stopped carrying my camera because it was additional weight in my bag. It wasn’t being used often enough to justify carrying it along with the laptop, PDA, cell phone, and associated cords.

A packed bag is the norm for working photographers, but my reality was not working in the field but rather trying to develop new digital imaging solutions. It wasn’t a lack of interest in making pictures; my mind was focused elsewhere.

However, every couple of weeks I would get frustrated that I missed a moment that I would have normally captured with my camera and communicated with others.

Now the problem is solved: I have a cell phone camera.

Photography means many things to many people at many times. At its core, it’s a means of communicating. People use photos to visually communicate about a vacation, a bike ride, a news event, a celebrity, or a “totaled” car to the insurance company. The process of visually communicating is in for a drastic shift due to the arrival of cell phone cameras.

Professional photographers and consumers around the world have finally started to realize the benefits of making pictures digitally but it’s not going to compare to how wireless photography will revolutionize how people make, share, sell and communicate with pictures.

Nowadays, people around the world don’t leave their homes without their designer cell phones, unlike with cameras. Professional photographers usually carry their cameras every day, but there are times when even they leave the house without a camera – yet they all take their cellular phones.

I had been waiting to get my first cell phone camera for years. Professional photographers say they are overwhelmed with the tools of their trade, which need to be toted around: a laptop, PDA, 2-4 camera bodies, lenses, flashes, film, batteries, smart cards, power cords, adaptors, satellite phones, backup hard drives, etc.

Carrying a combined cellular phone and camera is a totally different mindset than meandering around making pictures with a typical camera.

Even after buying my new cell camera, I continued to get frustrated that I was missing moments worth capturing because I didn’t have my camera – and then, at the last possible second, I would realize that I could grab my cell camera and capture the moment. Now not only can I capture the moment, but I can also instantly share it with someone around the globe by sending it as an MMS within seconds. Once again I am ready to make pictures wherever I go, as I used to do with my analog camera.

On the way to the office the other day, I was making pictures with my cell camera. I came across a war protest that was about to happen… I could have had the scoop but was late for a meeting so I couldn’t wait around. It will definitely be easier for the masses to capture news events when photographers are not present but the quality and legitimacy of their photos will always be an issue.

There are many people that I frequently brainstorm with about the present and future of digital imaging for consumers, professionals and businesses. Below are some of my more recent conversations that add perspective to how cell phone camera devices will revolutionize the photography industry.

Bob Goldstein and I were talking the other day on the phone between Milan and Los Angeles about his research on how digital imaging is revolutionizing visual communications:

One of the things that we’re saying is once you have something like [a cell camera] that’s with you all the time – you’ll end up reaching for it, instead of a pencil or a pen to write something down, instead of reaching for a keyboard to type something out.

Oh, absolutely. I’ve already done it twice. I’ve told you before. I definitely think the cell phone camera is going to revolutionize digital imaging for the consumer.

I think for everybody.

Well, yeah, you’re right, also for professional photographers and business folks. But when I think of the photographers out there, I think of the professional photojournalist, and then I think of the consumers making pictures. But you’re right. As far as visual communications, the business opportunities are tremendous. So I got back to the office today and I was in a meeting, and there was a blackboard or a whiteboard all scribbled. And I said, well, we can’t erase this because I don’t know if the person has copied it down. So I made a photo of it.

There you go....

… Saturday, when I was walking around the center near the Duomo in Milan, there were, I thought, a lot of people, seven to ten people that were doing the same thing with their cell phones that I was doing. And they all had three or four people looking around it after making pictures for instant gratification. So I downloaded all my camera photos, like a hundred of them, or eighty of them, and put them on my computer, but I kept the three that I really liked on the cell and shared them in the office on the phone. Everyone loved sharing those photos digitally. So the quality is horrible, but the concept that it will be at one megapixel and two megapixel and maybe even three by the end of this year proves the fact that the quality is just a matter of time. And the user experience is the big question – well, how does it do it? I was trying to figure out if people noticed me making pictures or was I a spy. Italians, I think, already have an idea that these cameras exist and they kind of don’t see it as awkward.

But they’re seeing it as a regular camera at that point.

Absolutely. Because the sole reason that anybody makes a picture, whether a professional, consumer or business is to communicate. That’s the sole reason; to communicate, share that communication, save a memory, and document history, which is just another communication for a later date.

And I’ll tell you something. When you said the quality was horrible, I’ve got to tell you – and part of it is I’m used to the palm-cam and all the rest of it – but in terms of you sending me a little what we would call a modern snapshot, several of the pictures, especially the first one that I looked at of the guy and the dog, I mean, the quality was completely acceptable considering, first of all, that it’s first generation, but also considering what it is. You’re sending me your impression of a moment on the street. And that was completely transmitted to me. I didn’t look at it and wonder what it was; I looked at it and went, wow.

Thanks. Yeah, yeah. No, I’ve showed it to many people, and they’re like, all the other ones are cute, but that’s a great photo…once I saw it, I knew I nailed it.

Right. So even with all the limitations, that still, that first impression, because you wielded it so well – I’ll tell you what I think also is happening, especially – I mean, this has been going on for a long time, but I think it’s really going on big time now – is in terms of quality expectations. We’re looking at hours of footage on TV now that looks like somebody puked on the lens and is underwater. (Exactly.) And so people are grateful for any kind of image. And again, in looking at yours [photo], I thought the quality was – considering that it’s still a, what, a 640-by-480 image – was astoundingly good. And as you say, rightfully so, within the year we’re going to have one megapixel cameras and up.


I am often talking about technology before it becomes commonplace and that is either a curse, blessing, insight or maybe all of the above. This time it’s no joke – wireless photography and more importantly, cell phone cameras are going to revolutionize how consumers and professional photographers make, share, distribute, sell and communicate with photos.

It has been fantastic to experience the transition from analog to digital photography in the last 10 years. In 1993 I was transmitting digital photos to SABA’s agents around the world with an ISDN line, 24 Hours in Cyberspace in ’95, then I created the first Internet broadband photography portal in ’97, today it’s cell cameras and tomorrow it will probably be photo blogs, personalized digital distribution and new marketing solutions for photographers - but that’ll have to wait until future articles.

The other day I was discussing with David Friend via email how cell phones with digital cameras will revolutionize photography and he wrote the following, which I totally agree with:

“What ARE the myriad applications? Sales people in the field connecting to the home office; photography scouts scouting shoot locations; many things Polaroids are used for now; grandkids/kids connecting with faraway parents/grandparents; disasters & breaking news events shot by local citizenry, EMS workers, etc....I think it's just a few steps away from Dick Tracy wrist-video phones.”

There are many examples of how businesses are using wireless photography to do their jobs better, faster and to save money. Insurance companies are sending people into the field to make and instantly transmit accident photos to corporate headquarters. The other day someone told me a story of a copier mechanic talking on a walkie-talkie to his office because he couldn’t figure out which plug he should adjust – they kept going back and forth, but the descriptions were not accurate enough. All of a sudden someone offered their cell camera to make a picture of the machine, asked for the office email address, and emailed the photos from the cell; within moments the office said ‘WOW, adjust that dirty red cord next to the blue cord’ and the copier was fixed.

The image quality of these cell cameras is 640 x 480 at best, but we should have three-megapixel cell cameras on the market within 18 months. Anyone who is complaining about the quality of today’s cameras is not focusing on the critical technological and cultural advances that are knocking at our door. Quality is a minor issue today and will be solved in time, as professional digital cameras are now good enough for publishing high-quality books.

Additionally, most consumers tell endless stories and share tons of laughter over photos that are barely legible. These cell cameras will not replace professional cameras, but they will be another tool, just like a web site, a wide-angle lens or analog film.

Can you tell which of the following photo strips were made with my cell camera?

I was trading emails with a photography friend and he was a bit outraged with my email that the BBC was asking their online audience to submit photos from cell phones, digital and analog cameras from Iraq war protests around the world. I thought this was a fantastic way for the BBC to develop a more global interactive online community but he took the following different perspective:

Now, now--it's quite rosy from your perspective but the trend amongst the media is transparent; cut costs and maximize profits, fatten up for our next merger, nothing else matters. I'm rather surprised that you don't recognize the BBC link as an exploration of potential free content.

In the future, I believe I will need to convince the editors and AD’s that they need something more than a chimp with a cellular camera, or I'm doomed to compete against the pizza-faced kids supplementing their part-time jobs at McDonalds! That kind of photo--or snapshot, whatever, from a cellular camera--will be so commonplace that there will be no budget for it.

I agree that the BBC probably had different agendas, from saving money to interactive programming, but I strongly believe that photographers shouldn’t feel threatened by consumer photographers because the creative eye and skills of most professional photographers are far superior. Publishers will always need them to succeed.

Professional photographers already compete with the public when it comes to photographing news events because publishers often publish public news photos. Cell cameras might make it easier for the public to make newsworthy pictures.

I finally arrived at Heathrow at 9 AM only to have a 5-hour layover before my flight to Milan, but thank goodness for the business center so that I could re-connect my veins to an Internet IV.

I just had an interesting chat via instant messenger with Damon Kiesow, who is a Sr. Photo Editor working the early morning news shift today at America Online in Virginia.

Cell cameras will revolutionize the photography industry because they can instantly share photos from your camera to people around the world! The other day, a friend said that she had hundreds of photos that no one has seen because she is too busy with two young kids to even think about printing, uploading, and emailing the photos. The reality is that digital photography hasn’t made it easier to visually communicate, yet... The photography industry is in the first inning of the first game of a long competitive season.

Many industry analysts are forecasting that camera phones will outsell "standalone" digital cameras in the next couple of years!

Cell cameras will only successfully revolutionize the photography industry if it is simple to make and share photos with these new devices. Additionally, today’s pricing models with the wireless companies have to become cheaper to get everyone addicted like I am. It is so much fun to make pictures with my cell phone and then instantly share them with others around the world.

If the above is not enough to convince you that we are on the precipice of another major transition in the photography industry then the following two stories might help.

After arriving in Milan, I picked up my cell phone camera, which I wasn’t able to use in the States because of Italian pre-paid service issues. Within one hour of using my cell camera again in Italy, I had sent 5 new photos as MMS messages to friends around the world showing that I was back in Italy. Cell cameras are addictive!

I was at the Blue Note Jazz Hall in Milan and I wanted to share the moment with a friend in California who is a Jazz musician. I made a photo with my cell camera, recorded a couple of seconds of audio from the show and then sent the photo, audio and a short text as an MMS minutes later.

Everyone at my table and the neighboring table was amazed and jealous, and asked in multiple languages what cell phone camera they should buy! In conclusion, carpe diem and figure out how you should leverage wireless camera devices in order to either visually communicate more easily and/or increase revenue for your photography business.

[All photos made with the Sony P800 Cell Camera except one photo strip]

An Image is Really Hundreds of Data Points That Tell Us Who We Are

Anastasia Leng, CEO of Picasso Labs © Robert Wright/LDV Vision Summit

Join us at the next annual LDV Vision Summit in NYC. Early bird tickets are on sale until April 30.

This is a transcript of the keynote by Anastasia Leng, CEO of Picasso Labs, from our 2016 LDV Vision Summit.

Thank you. Hi, everyone. My name is Anastasia Leng and I am a former Googler using technology to measure creativity. Now, before you decide if that statement alone makes you love me or hate me, let me tell you a little about how we're putting science into something that has traditionally been an art.

As many of you know, human brains process visual information sixty thousand times faster than we process text. But the result of that speed is that we walk away with a very subjective understanding of an image. I like this image, or I don't, this is a good image or a bad image. We've all probably sat in a meeting discussing a photo with someone who is more senior than us who says, I don't know why, but I just don't like this image, and there's nothing we can do about it.

Technology is very different; technology looks at an image objectively. It gives you the most comprehensive set of metadata contained within an image. With that data, thanks to image recognition technology, we've done things like build visual search, we've done things like build content recommendation systems.

What if we could use this data to understand how your user's reactions change based on the content of your image? What if we could use this data to better understand how their perception and their interaction with your brand changes, and the why behind image performance?

What if we could use this data to measure and optimize your visual voice at scale? Now, this in itself is not really a new concept. Psychologists have been toying around with this for years. They've been looking at how visual stimuli change users' behavior, perceptions, and ultimately their reactions to a brand or a different environment.

For example, in the late 1990s, the government of Scotland decided that they wanted to reduce the crime rate, especially at night. At the same time, the government of Japan wanted to reduce suicide rates at train stations where people were jumping onto the tracks. Both installed blue lighting, which is meant to have a calming effect. Scotland saw a 9% reduction in crime.

Pharmaceutical companies, big pharma, have been accused of using color psychology to influence their trials, thereby heightening the placebo effect. Consumers, or their trial participants, rate red pills as having more of a stimulant effect and blue pills as having more of a depressant effect. Now, this varies by gender and this varies by country, it is very culture-specific, but the reaction is real; we react to visual stimuli very differently. As a fun fact, one of the common complaints about Viagra in the U.S. is that the pill is blue and it doesn't match up with the reaction that consumers expect it to have.

© Robert Wright/LDV Vision Summit

Now, brands have known this and they've used it anecdotally, one-off. If you open your phone now and look at any of the food apps on your phone, chances are they will be red or orange; that is not an accident; porn's the same way, it's not an accident – it is because they want you to be impulsive. If you look on the left side you'll see a bunch of brands that are blue. Those brands want you to give them your money or your data, and what they're saying is: we are safe, we are trustworthy. If you're going for global domination, this rainbow effect here at the bottom seems to be the lucky ticket.

But it really is about so much more than just color, and to illustrate this I want to tell you guys a personal anecdote. So, I'm a freak about A/B testing, which has bled a bit into my personal life, and when I was raising money last August and pitching my first clients, I started testing it out, experimenting with how the way I looked impacted my conversion rate at a meeting.

I started to look at whether investors' or clients' reactions changed based on whether I wore my glasses or not. This is nowhere near statistically significant, right? But the reality is, I now wear glasses to every investor meeting because I saw that my conversion rate was higher. While this is nowhere near statistically significant, what it does tell you and what we all fundamentally believe is the way we look, the things we wear, impacts the way people react to us.

If I was trying to optimize for conversion rate in dating, it'd be the other way around. The question is – and actually this is probably very context dependent – but if this is the reality, why wouldn't the same be true for brands?

Brands spend billions of dollars creating millions of images. Those images contain trillions of data points about consumers' actual revealed preferences about the visual content within those images. But brands have no idea how to harness this data. And this is what Picasso Labs does: we give you very specific performance insights to help you understand the “why” behind image performance and help you better understand who your audience is and how different parts of your audience respond to different visual content.

Now, our technology is never going to tell you something like “always use red on Instagram” or, you know, “blondes are always better in your display campaigns”; what we believe is that audiences react differently to different visuals based on the brand behind them, so we do very personalized image recognition and machine learning to understand what it is about a specific audience's reaction to your brand that causes an impact on behavior.

As a result of the way we work, a lot of the insights we gather we can't really talk about, because they are seen as a competitive advantage and are very proprietary to the brand, but we have been working with a number of luxury fashion companies who've let us expose a little bit of the data. So a few months ago, we were working around fashion week with companies who wanted to understand what type of image style worked best – luxury fashion companies on Instagram. What they were measuring was increase in engagement; engagement on Instagram is likes, comments, etc. I'm gonna let you guys guess, and I've started you off with an easy one: which image style, for luxury fashion brands – I'm talking Chanel, Louis Vuitton, Prada, etc. – gets the highest engagement on Instagram? Raise your hand if you think it's runway. Okay, a couple of hands. Raise your hand if you think it's editorial, right? What about street style? Yes, okay, told you guys this was an easy one, street style is absolutely right.

Now, the fascinating thing here is we all knew that, right – not all, most of us, most of us knew the answer here; maybe we were lucky, maybe it was intuition. Around fashion week, most luxury fashion brands that were monitored saw their engagement rates drop. Part of that is a saturation problem, but part of it is that, even outside fashion week cycles, any runway content just seems to really drag your performance down – and we've analyzed this by looking at millions of images across a bunch of fashion brands.

© Robert Wright/LDV Vision Summit

Now, the second one: what type of model shot works best? So raise your hand if you think partial body, face visible, is going to be the winner here? Okay, what about partial body, no face? Okay. Full body? Okay, so I think full body's got it, but no one seems quite sure – this one's a bit harder, and the results were really surprising. No face wins. In fact, if there is any takeaway that we've seen across most luxury fashion brands, it is: cut off the face, right? Which is crazy if you think about the amount of money people spend hiring models strictly on the attractiveness of their face, and actually consumers don't want to see it.

So, that's Picasso Labs. Our mission is to foster creativity through data; we really believe in giving you the kind of data that helps you make smarter creative decisions, so that next time you're in a meeting with someone who says "I just don't like it" you can say "Well, I don't care, because I have data to show that our users do."

Thanks very much.

Sturfee Launching Augmented Reality Gaming App with Investor They Met at LDV Vision Summit

Anil Cheriyadat, Founder and CEO of Sturfee © Robert Wright/LDV Vision Summit

Anil Cheriyadat, Founder and CEO of Sturfee, was a finalist in the ECVC at the 2015 annual LDV Vision Summit. Sturfee's technology uses mobile cameras to recognize the real world around you, then augments it for travel, gaming, entertainment, etc. We caught up with Anil as Sturfee prepares to launch its Augmented Reality social application...

How have you advanced since the last LDV Vision Summit?
Since the Vision Summit we have raised US$745K as part of our initial seed round. We have two Silicon Valley investing firms along with other notable angels that took part in the funding round. We are now in the process of raising the remaining part of the seed round (US$800K). We have also expanded our team to 7 and are looking for more engineers in the areas of deep learning, geometrical computer vision, and GPU programming. 

What are the 2-3 key steps you have taken to achieve that advancement?
Turning cameras into novel interfaces through which we can transform live streets for travel, gaming, and enterprise applications has disruptive potential. The problem was quite interesting from an AR, AI, and robotics perspective. The approach we took was unique. We studied the problem well and put our solution through different conditions. Being a team of engineers who had worked closely together before helped us advance quickly.

Our move to San Francisco in 2015 was also key (before that we were at Oak Ridge National Laboratory in TN); it allowed us to be close to the “mothership.”

What project(s)/work is your focus right now?
Two things are in line right now: (i) we are laser focused on bringing the technology to market as a social consumer product for a focused group. The technology can convert physical places into entertainment zones – shoot a 10-sec video of the Empire State Building, place a basketball hoop on the building, and challenge all your friends to beat you! (ii) With our technology we are addressing the fundamental AR problem – estimating 3D measurements through the camera lens. Existing solutions were designed to operate effectively indoors. But outdoors, the 3D measurements can be estimated from other vantage points – even from space – and this is key. We are focused on advancing our street vision engine, which fuses image data captured from diverse perspectives but converging on a location.

What is your proudest accomplishment over the last year?
The biggest achievement is putting together the team. We knew that the solution we had developed for the AR problem was unique. It required people from diverse backgrounds. Currently our team consists of members with a PhD in satellite image analysis, a PhD in motion analysis, a game designer, a GPU engineer, a Golang stack developer, and a game mechanic and iOS programmer. It's the whole package that makes it work!

Anil Cheriyadat, Founder and CEO of Sturfee © Robert Wright/LDV Vision Summit

What was a key challenge you had to overcome to accomplish that? How did you overcome it?
Building a team when you don't have money is not easy. But at Sturfee we were able to bring people together at the early stages. The key to this achievement was clearly defining the problem we are solving and illustrating the vast disruptive potential, ranging from travel to wearables to robotics.

Dr. Harini Sridharan, who is now the CTO and Co-Founder, joined Sturfee in the early days. We have been working on this for a long while now. The first IEEE workshop on “Computer Vision for Converging Perspectives,” which I co-chaired as part of the 2013 International Conference on Computer Vision, was the starting point. As the team gained more understanding of the solution, it became clear to everyone that we were onto something. That has been the key motivation pushing us forward.

What are you looking to achieve in 2017?
With our first product we plan to give people the power to generate a new form of AR pictures and videos – imagine throwing a digital ball into a live street scene and it bounces around, hits an incoming car, and flies off. Streets are now game scenes! Every user with a phone can convert live streets into game arenas. We will empower people to turn streets into magic zones. We are aware that transitioning from technology to product is not an easy step. We have been preparing for this since Jan 2015.

Did our LDV Vision Summit help you? If yes, how?
The LDV Vision Summit helped us connect with people who are really good in the computer vision business. The meetings and discussions we started at the summit eventually resulted in angel investment.

At traditional IEEE conferences you might find groups focused on the technical areas. At LDV you have a balance of technical and business experts in the audience to network, brainstorm, recruit and many investors to speak with.

What was the most valuable aspect of competing in the ECVC for you?
Feedback from the judges was valuable. Again, it was from people who understood computer vision as/for a “business” solution.

What recommendation(s) would you make to teams submitting their projects to the ECVC?
Startups in computer vision should definitely apply to the ECVC. You will definitely meet interesting folks and companies who have been at the summit before and are now advancing to the later stages. That will motivate you.

What is your favorite Computer Vision blog/website to stay up-to-date on developments in the sector?
For interesting vision stuff, I read Kaptur, TechCrunch, and Tombone's Computer Vision Blog (written by Tomasz Malisiewicz).

Join us at our next annual LDV Vision Summit on May 24-25 in NYC.  Early bird tickets available until April 15.


Josh Elman Says Building a Company is Hypothesis Testing

Josh Elman, Partner at Greylock Partners © Robert Wright/LDV Vision Summit

Join us at our next annual LDV Vision Summit on May 24-25 in NYC.  Early bird tickets available until April 15. This fireside chat with Josh Elman of Greylock Partners and Evan Nisselson of LDV Capital took place at the 2016 LDV Vision Summit.

Evan: I'm honored to bring up our next guest, Josh Elman from Greylock Partners - come on up. Hey Josh.

Josh: Hey Evan, thank you.

Evan: Good afternoon.  People are going to make pictures, make videos and there's a couple 360 cameras out there. We don't know what is going to happen.  

Josh: Do you have drones flying around?

Evan: Yes. They're very silent drones. We talked to Bijan yesterday about Lily Robotics, which is the flying camera, and all this kind of stuff. In your own words, what is Greylock's focus – size, stage, and what are you most focused on?

Josh: Greylock as a firm has been around 50 years. We're one of the oldest venture capital firms, and we keep looking for the same things: companies that are building enduring value, that can really lay a foundation to be 50-year-or-more companies, and really build platforms of the future. We really get excited about companies that understand how to build great network effects. Whether they're consumer networks, where lots of people come together. Whether they're marketplaces, where lots of transactions happen. Whether they're data networks, where the company amasses so much data that it creates a real advantage to create and sell more and more products – and we see that often.

We focus on consumer platforms. In the past we've been involved in Facebook, LinkedIn, Airbnb, Pandora. On the enterprise side we've been involved in companies like Workday, Palo Alto Networks, and AppDynamics. We're a billion-dollar fund; we focus mostly on Series A and Series B investments, writing checks anywhere from $7 million up to $25 million or so in a Series B. We take real board positions – we really roll up our sleeves and help.

All of my partners have worked at companies that have become very big, iconic companies in the past and gotten very large themselves, and have been very hands on, usually in product management or founder type roles, where they really are driving the actual strategy of the company. We really take a hands on, helpful position trying to help those companies become great and find their way through.

Evan: You worked at some bigger companies and were always focused on the product and growth aspects of those companies, like Twitter, like LinkedIn, like Facebook. Tell us a little bit about the differences between those three, as an example, because most people here don't know what happens inside. What's the one nugget of difference between those three that you experienced as an employee there?

Josh: It's funny that you call them bigger companies because when I joined they were a lot smaller.

Evan: What size were they? What number were you there?

Josh: LinkedIn was about 15 people when I joined. Twitter was about 80 people when I joined. Facebook was about 500 when I joined.

Evan: I remember talking to my friend Constantine, who was early at LinkedIn. We were having coffee and talking about our different startups: "Well, is yours going to work?" "I don't know. Is yours going to work?" "I don't know." Fortunately LinkedIn was one of the ones that worked.

Josh: They figured out a lot. I'll start with the quick similarity between all three companies, LinkedIn, Facebook, and Twitter. We grew up in the same era: LinkedIn was founded in 2002, launched in 2003. Facebook 2004. Twitter 2007. This was an era when Google was the big dominant company. Everybody looked at Google as this massive, brilliant technology company that could make great technology that does everything important.

At Facebook, LinkedIn, and Twitter, the whole thing was: we definitely thought of ourselves as technology companies, we were building new products and everything else, but in some ways we thought of ourselves not as technology companies. We were really more like human psychology companies, giving people new tools to connect. I almost joke, but we were just putting up forms that people filled out; then we took the information they submitted in the form and shared it with more people. There wasn't a lot of technology in the way that Google kept talking about technology back then.

That was sort of this amazing contrast. It was like: they understand technology, but they don't understand people. We were going to build products that deeply understand people - that get the language right, that get the words right, that get the images, the faces, the right content - so that when you're experiencing this product, you're experiencing it in a much more emotional, human way, instead of thinking you're talking to a black box that's masterful at technology.

At Facebook, LinkedIn, and Twitter...we were more like human psychology companies, giving people new tools to connect.

-Josh Elman

Evan: As you were there, you threw things up - forms, content, images - but how did you know when a couple of them worked, so you could double down and triple down on those features? Say at a basic level you put ten things up in the span of a month or two; how did you say, "That one's it"? Everyone in the audience, or maybe half the folks here, is trying to work through that at their own companies.

Josh: We spent a ton of time looking at our data and really understanding it. At LinkedIn we spent a lot of time on virality; at Twitter we spent a lot of time on retention, really understanding what the overall conversion rates were and whether there was retention behavior. The thing that I always love to remind people of, that I think is really important, is, as I like to say, "Data is the plural of anecdote." You can be looking at all this data and talking about conversion rates and everything else, but you have to get to the meat of the underlying stories. You have to actually talk to users. We would talk to users all the time and say, "Why did you sign up? What made you want to do this? Why didn't you want to sign up?"

At LinkedIn we had a lot of people who didn't want to sign up because they thought LinkedIn was a job site and they didn't want to put their resume online. We ended up building a whole jobs product so we could say, "Hey, we do have a job site too. It's a little bit of a business, but if you don't enter the jobs area, LinkedIn's still really valuable for you as a professional." The other thing we spent a lot of time on was language.

The whole way LinkedIn grew was I would send an invite to Evan - "Hey, please come join my network on LinkedIn" - and we'd have a bunch of language. If we had language that said, "It will make both of our networks bigger," then when Evan signed up for LinkedIn, he was more likely to invite more people. We were able to see a secondary viral effect that was even better, just by tweaking the language that we put on the invitation.
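The secondary viral effect Josh describes can be summarized as a viral coefficient per invite wording: of the people who received each variant, how many signed up, and how many invites did those signups then send. A minimal sketch of that arithmetic; the function name and all of the counts below are hypothetical illustrations, not LinkedIn's actual data:

```python
def viral_coefficient(invites_sent, signups, second_degree_invites):
    """Expected new invites generated by one original invite:
    first-degree conversion rate times invites sent per signup."""
    conversion = signups / invites_sent
    invites_per_signup = second_degree_invites / signups
    return conversion * invites_per_signup

# Hypothetical numbers for two invite wordings:
variant_a = viral_coefficient(10_000, 1_200, 3_000)  # plain wording
variant_b = viral_coefficient(10_000, 1_250, 5_000)  # "make both our networks bigger"

print(round(variant_a, 3))  # 0.3
print(round(variant_b, 3))  # 0.5
```

With numbers like these, the wording that nudges new signups to invite more people wins on the second-degree effect even when first-degree conversion barely moves, which is exactly the kind of difference Josh describes measuring.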

© Robert Wright/LDV Vision Summit


Evan: How were you tracking that, though? Those were still early days, and companies are still struggling today with how to track whether one word is better than the next. Were you guys hacking stuff together behind the scenes, was there actually a whole analytics platform, or were you flying blind with the blindfold on?

Josh: This was 2004. There were none of these analytics platforms or anything like it. We had a data warehouse, so every night the entire LinkedIn database was cloned into a data warehouse. Then the next day I could go in and tinker around the data warehouse-

Evan: What kind of tinkering? What kind of coding? What were you doing?

Josh: I was writing a bunch of SQL queries that were basically selecting the group that received invite A versus invite B. Because we actually logged the entire chain - if I sent an email to an email address, I could then track what text they got, and then when they came and signed up, I could track their conversion rate and their behavior. I could join all these things.

I could actually test, from the original invitation, how many people actually signed up and how many people then sent more invites. I could then start to go to the second degree too. It started to get too complicated for my level of SQL knowledge. Now we have much more robust virality platforms. We were having to scratch and claw and make it up as we went back then.

Evan: Before you joined there, did you know anything about SQL or did you think it was a kind of food from Europe? What's the deal? Did you just quickly teach yourself one day and say, "Hey we need this. We need this, and we don't have resources," and you did it, or what?

Josh: My first job was as a programmer. I worked on a product called RealPlayer that some of you might remember. Trust me, when I meet 22-year-old founders these days and I say RealPlayer, it's a total blank stare.

Evan: I mention dial-up, waiting for an image for half an hour, and they're like, "What?"

Josh: Yeah. Re-buffering video. I'd been a Windows programmer building the RealPlayer client and running the team doing that. I had a lot of coding experience, but I hadn't done much database work. Given that I had already been writing code and shipping software, picking up SQL to do the basic queries I was doing was fine. As I said, I did kind of reach my limits, and every once in a while I would write an inner join that would take down the data warehouse, and they would come and yell at me for it.

Evan: We heard from Jack Levin yesterday talking about being the first infrastructure employee at Google - he would press a couple of buttons and the whole system would go down, and he had to take a bike to the office and go pull out the plugs and put them back in. So times have changed.

Josh: Yeah times have changed.

Evan: What did you learn ... First, it's easy to talk about the good things, sometimes we learn from the good things, sometimes we learn from the bad things. What's a mistake you really screwed up that you learned the most from at one of those early companies that surprised you but you learned the most from?

Josh: That's a great question. One of the biggest mistakes, and just one of the great learnings... I like to think of working at these companies as constantly making experiments. Everything you do is an experiment. You basically say, "We have a thesis, and it's either going to work or not, and we're going to build this thing and see what happens." That way I try not to think of it as a mistake, because we went in with the thesis and we tested our thesis. Rather than "I know this is going to work," and then when it doesn't work, you're disappointed.

Evan: It's that psychology again. Did you study psychology?

Josh: I actually did.

Evan: I knew it. It's a perspective.

Josh: I did this great program at Stanford called Symbolic Systems...

Evan: Okay. Now that makes sense.

Josh: Linguistics, psychology, computer science, and philosophy. When I joined Twitter it was 2009, and Twitter had already been on Oprah and a bunch of people had heard of it. We had this massive problem where signups didn't stick around. We did a bunch of analysis, interviewed and called a bunch of users. One of the big beliefs in the company was: we have to change the Twitter homepage. If we just make Twitter easier to understand when you show up, then it will grow so much better.

This was just this common thread in the company. I think it still is today. "If we just change the homepage, everyone will do it." The first project that I got started on, we rebuilt our onboarding flow. That actually worked really well. I got some credibility in the company, and they were like, "Oh, we can change things and actually make progress-"

Evan: “Josh actually knows what he's doing-”

Josh: I made a good guess and it worked. Then they were like, "The next thing, let's go make that homepage." We were like, "Let's go rebuild and design the homepage. Let's make it fresh with like new tweets coming in that show what's happening live. We'll show trends. We'll show all this stuff to make the Twitter homepage much more dynamic so you'll get a taste of Twitter," and this was replacing a page that had a big search box.

If you don't know how to use Twitter and the first thing you do is type something into a search box to search Twitter, that's probably the absolute worst way to try to figure out what the heck is going on on Twitter. We did all these changes. We made this page. It took us several weeks, because we had to build a new algorithm that would surface the top tweets, and build some editorial tools so that we could immediately take something out if something inappropriate happened to show up.

Then we shipped it, and we actually did an A/B test. We were like, "Okay, we'll ship it to half the users." It's very tricky to do A/B tests on logged-out pages, because you just have to cookie people, and they might come from a different computer, but we'd try to do our best analysis. We shipped it and it made zero difference. What we learned, or what I assumed we learned, is that everyone who showed up to Twitter had so many preconceived notions of what it was about that just clicking to go sign up - where hopefully we could teach them through the signup flow - made much more of an impact than whatever we put on the homepage. It turned out, as we kept doing more testing, that removing all options except for login and signup actually performed mildly better than everything else-
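The "you just have to cookie people" trick Josh mentions is usually implemented by setting a random cookie and bucketing deterministically on it, so the same browser keeps seeing the same variant across visits. A sketch of that assignment; hash-based bucketing is a standard industry approach, not necessarily what Twitter actually did, and the names below are hypothetical:

```python
import hashlib

def assign_variant(cookie_id: str, experiment: str,
                   variants=("control", "new_homepage")) -> str:
    """Deterministically map a browser cookie to an experiment variant.

    Hashing cookie + experiment name means one browser always gets the
    same variant for a given test, while different experiments split
    the population independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{cookie_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same cookie always lands in the same bucket:
assert assign_variant("abc123", "homepage_2010") == assign_variant("abc123", "homepage_2010")
```

Because a different computer means a different cookie, the same person can land in both buckets, which is exactly the analysis weakness Josh points out for logged-out tests.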

Evan: Because you hooked them in.

Josh: Well, by not giving them any other ways to click, get lost, and then go away. It turned out that literally not adding any features to the page - adding nothing - was by far the best. For a while... again, I've been gone from the company five years and they keep testing this stuff, but for a long time the Twitter homepage was basically: type in your email address and password if you're logging in, or first name, last name, and email address to sign up. That was it. It turned out that everything else was distracting.

Evan: Regarding that philosophy and testing, that psychology of "not everything's going to be perfect, keep on testing": how does that transition to your mentality with companies that you've invested in early, when they come to you or you have board meetings? "This is working, that's not working," and obviously there's some tension there. "What are we going to do? Where are we going to go? Is this going to happen or is it not?" How do you work with them, with your deep knowledge on the product side, to help the business get to where everybody wants to go?

Josh: It's funny. We think of investing very similarly to this philosophy. We think of company building the same way. Reid Hoffman, who founded LinkedIn, is now my partner at Greylock. I remember a conversation he and I had in 2004 where I said, "Explain to me financing and how that works." He said, "Look, it's a series of hypothesis testing. What you do is you raise enough money to go challenge this set of assumptions and try to see if they're true or not. If you mostly prove them true - we can build a product, people will use it a little bit, and maybe they will share it with friends - then you go test the next set of assumptions, which is: can we build more features or more things that will help them use it more? Then you raise more money to go do that, and if that works, then you turn it into a business. Then you raise a bunch more money to turn it into a business."

He said the entire process of building a company is a set of hypothesis testing. We very much keep that philosophy at Greylock. When we make an investment we're not like, "This better work. If this doesn't work we're going to fire you as CEO and remove funding from your company." It just wouldn't work that way. It's like: we're ready to join the journey, we're excited for the next hypothesis, and we're giving you this much capital right now in order to go try to prove it. If we get towards the end of that amount of capital and we haven't proven enough of our assumptions, we should rethink what to do with what you've built, with your own career, with where you're going.

If, by the way, we've proved them faster, we should go raise more money and get a lot more aggressive to keep going to prove them. It has been a couple of years now, and I've already gotten to see both sides of that journey. It's incredible. We just keep it as this testing. It's never failure. It's are we testing enough and at some point you may decide that with this cap table and this amount of money, and this set of people that you've hired on this journey, maybe that's enough to try something new and keep going, or maybe it's time to rethink things-

The entire process of building a company is a set of hypothesis testing. We very much keep that philosophy at Greylock. When we make an investment, it's like we're ready to join the journey and we're excited for the next hypothesis, and we're giving you this much capital right now in order to go try to prove it.

-Josh Elman

Evan: There's a hard question I want to ask, and it's tough to pinpoint, but it's actionable. Some of the companies are doing great - fantastic. They're doing their thing and they have the secret sauce, whatever it is. The ones that are not are the harder ones on both sides. It's kind of like: if you failed once and you learned, great; if you failed twice and you learned, okay, you might still have potential; but if you failed three, four, five, six times, it becomes a potentially negative trend.

If you test a hypothesis that doesn't work four, five, six times, is there coaching from your experience - because network effects are something you've got real experience in? Give us an example of something you thought was going to work three, four, or five times, and it either never did, or that one thing came out of nowhere - the story behind the scenes, to give people a flavor of doubling down and tripling down. How many times can I fail before I give up?

Josh: Look, you guys might know I invested in a company called Houseparty (previously known as Meerkat) just over a year ago; I invested right before South by Southwest. I'd been doing live video since RealNetworks, and I believed we were just entering a world where mobile live video was about to happen, where everyone in this room could whip out their phone and be broadcasting live video immediately.

I'd met Ben Rubin, and by the way, Houseparty was already the third product of that company, because he had started it several years before and built a team in Israel that said, "We're going to build something really important for live video that we think is going to be amazing." I had gotten to know him and we'd shared a lot of ideas on live video, and when Houseparty kind of popped in late February, early March of 2015, I was like, "Look, Ben, I'm really excited. This may be the moment in time where the phones and the network capacity can handle it, where we go do this." I had also known that Twitter had bought Periscope, a different live video company. They hadn't announced or launched it yet, but I knew about it, and I said, "I'm still going to back you, and this is going to be such a big pie that we're going to figure out what our share of it is along with Twitter."

Fast forward four or five months: Houseparty was actually doing reasonably well versus Twitter. I think Twitter was ahead, and Periscope was growing a little bit faster than Houseparty, but we were holding our own. Then Facebook comes out and says, "We're going to do live video now within our network. Anybody who's got a celebrity or an audience on Facebook can just go live to their whole audience." We were like, "Oh man. We can maybe compete against Twitter, and that's hard. But competing against Twitter and Facebook - really, just competing against Facebook - is really hard. Right?" Facebook was like, "Oh, let's go live," and billions of people just show up immediately.

Evan: That's kind of challenging to deal with.

Josh: Live video is much more about intimacy and its authentic nature. Part of the reason that we're all here is to have a much more authentic experience, in the moment, versus somebody watching later. It's not just about numbers, but numbers do help. We learned that through that experience. We said, "Okay, this Houseparty thing, trying to compete against broadcast live video, may not be the thing."

The team has really hunkered down and is working on something new again. We're seeing some really interesting, early positive stuff. What they're doing is much more around group social video. It's really exciting to see them just keep taking cracks at it, and the nice part was that we had a great syndicate that all gave them enough capital to really go try to prove the future of live video.

It wasn't just a bet that Houseparty must work or bust; it was a bet that Ben and the great team he's built at the company - the company's actually called Life On Air, because it's about how you live your life on air - are going to produce something great. I'm still excited, but they also have a runway. At some point the capital that even we gave them might run out. If we haven't gotten ourselves to a position where this company's worth a lot more capital, then we'll figure out what the right thing to do with the company is.

Evan: Cool. Relating to Houseparty, but also probably Jelly and several other companies that you've invested in: we started our pre-interview chat online a couple of weeks ago with a lot of photographs of your face, which related to the title, because you think faces are the key to social platforms. I thought that resonated with this audience and with this topic. Tell us a little bit more about why faces are the most valuable part of social platforms.

Josh: We did these eye-tracking tests and heat-map eye tests of Facebook and Twitter seven or eight years ago. You could immediately see that, just as humans, we're programmed to recognize a face and its shape, almost from the time we're babies. You could see, as people read a Facebook feed where there are faces as the profile icons, their eyes immediately gravitate towards the face and then read the content. Then they go to the next one and read the content. That's a really important thing.

When you're reading a set of content on Facebook or Twitter, you may not realize it, but you're reading the name and identity of the person and then what they say. Then the name and identity of the next person and then what they say. That's what creates the very conversational, very human nature of the whole experience. That is very, very hard to replace. There are other content platforms where the name and identity isn't important. You can go to some of the anonymous platforms like Yik Yak, and you realize it's just content - there are no names or faces - and you actually read it very, very differently than when you're reading stuff on Twitter or Facebook, because those faces are just so key.

That identity is really what keeps people there. I meet people that I have been following on Twitter for a long time and I've seen these words next to their face for a long time and I feel like we really know each other. We're just having this dialog in conversation tied to their face. Then you look at Snapchat which came out more recently, and it was like the quintessential version of the face. It was like the most authentic faces. People make funny faces and awkward faces and faces before they've cleaned themselves up in the morning-

Evan: Which of those faces drives more traffic? That's interesting because of facial recognition, and a lot of the computer vision we've discussed yesterday and today in this sector. At what point can the algorithms start delivering on what we understand about sentiment analysis, and about what the face is doing? What the personality of the person behind the face is doing?

Josh: It's interesting, the things that we've seen that drive the most traffic are when the face changes.

Evan: You mean in animation or over time?

Josh: No, over time. When you change your profile picture, that's the single biggest way to get more clicks to your profile, have more people looking at it, and get more likes and comments on your content, because all of a sudden we've gotten used to seeing you look the same way in conversation. Changing it is the biggest single trigger.

If you want to just take this trick and go get a bunch more likes or comments or retweets on whatever social platform you're using, go change your profile picture every week or so. You'll be surprised: "Whoa, that really works?" When you get a new haircut, everybody that knows you says, "Nice new haircut." It's exactly the same philosophy.

© Robert Wright/LDV Vision Summit


Evan: I'm sure you could also test the type of comments you get if you put up an unhappy or depressed face versus a laughing face. We've talked about some deals you've already done, and towards the end of last year you wrote about four or five topics that you were interested in. For the audience's sake, they were: live conversations, interest groups, better self-expression, preserving all the content we make, and VR/AR. Now, in addition to being a great guy and a smart operator and investor, you are very focused on opportunities that relate to everything in this space. Would you add any to that list from the last five months? What is the newest, and latest, and greatest?

Josh: I think all of these are really important, because I think we are more connected than we've ever been to the people around us. Yet I think we are somewhat less fulfilled by that than we've been in a long time. I feel like Facebook and Instagram and Twitter have gotten us so good at shouting at each other - even Snapchat stories to some extent - where you are looking at each other's best moments when you're bored on the couch, or late at night, or in bed. We're not having as many intimate, real experiences. A lot of the stuff around connecting people through interest groups, connecting people in more live moments or live conversation, is really about creating much more human interactions around the stuff we care about. That is much more enriching, versus seeing somebody else have a great time at a party - and the time you're really looking at that is when you're not having a great time, because that's when you have time to look at your phone.

I think there's a lot that's going to happen there. I think all of that is still stuff that I'm looking for, because we haven't seen anything really transform or break out. I'm also getting more excited about Lily and some of the connected camera stuff that we're really about to see. I think we're moving into a world where we're so used to having a phone in our pocket, but that's still one too many steps. Pervasive cameras, and cameras that can get a brand new perspective we never could before-

Evan: So internet of eyes?

Josh: I love your phrasing on the internet of eyes. Have you guys seen the Hover Camera, which is a new demo that will go live sometime soon? Watching that video, it was an actual camera that we would feel comfortable having in here, looking at us and getting a new perspective that we could never get without really expensive equipment.

Evan: Whether or not we're all going to have our personal flying cameras hovering outside, waiting for us to leave so they can follow us - it could be a good thing and a bad thing. It's a little odd. It's almost like a parking lot of flying Lilys waiting for us.

Josh: I think it's really interesting. We talked about that: wouldn't it be great if they were silent? The problem with physics is that it's actually still really hard to lift a piece of equipment and have it be silent. I think it's also hard to have a camera and a computer and all this stuff, with enough battery life, without worrying about it crashing on our heads. I think we're a little bit away from it feeling like something we're comfortable having around all the time.

I think we're going to start seeing people wanting many more moments captured so that they can actually be in the moment. The whole problem with the camera is you either use a tripod or you're never actually in the picture. Growing up, there are like three pictures of me with my dad, because he was the family photographer. I think that's going to be a big trend.

Evan: One of the things that I would love: you came from California, flew here, went to the airport and took whatever transportation, walked across town or whatever - there were a thousand cameras that you passed. You don't control access to that content, but what if you could? What if it said, "Oh, Josh arrived here. These are three pictures. Do you want them?" Or they could be sent to you as a service. People look at that identity tracking, the internet of eyes, sometimes negatively, sometimes positively. That's the positive side. We are on camera in many places, but why do we have to make a picture ourselves or have somebody else make it? There are pictures being captured which have value.

Josh: I think that's a great point. As we move into this world, at some level there's a massive debate about privacy, but I think the reality is it's always a trade-off between privacy and convenience. Or privacy and connectivity.

Evan: Isn't privacy dead?

Josh: At some level privacy is dead.

Evan: At most levels. It's pretty much dead.

Josh: In the United States I feel comfortable saying, "Look, there really isn't anything I'm doing that is so private that I worry about it being used against me." I do respect that people from other environments - and even in the political environment we may be entering here - may get a lot more worried that the things they are naturally doing could be used against them. I don't think privacy and encryption are going to be dead until we really live in a world where we don't have governments or other criminal activity that can really-

Evan: That's always going to exist, unfortunately. I mean, I'm a very private person in many things, and there are other aspects where I'm very public. We're here on stage and we're very social in sharing views online. There are aspects that I want to keep private. I still believe, because of the growth in technology and the evolution of what's going on, that privacy as we know it is definitely dead.

Josh: I think it's fair to say that the expectation of true privacy in anything we do is pretty much dead.

I think there's a really important thing, which is that obfuscation is going to be the key. I don't really share much on Twitter about the fact that I have a child. If you follow me on Twitter, you could barely detect that I have a kid; but if you follow me on Facebook or on Snapchat, I'm very public on those platforms about my kid. I actually think we're going to get much more into selective sharing, and I think we're going to see a lot more happen there.

Evan: That's a great question. Let's get the guys with the mics - we are going to start taking questions. We have about seven minutes left, so anybody who has questions, raise your hand in a second. Regarding that question of sharing: you specifically said you don't share photos of your kid or talk about it on Twitter. Where do you like to share what photos? Do you have a mental filter, or is it just a natural process that's evolved? What do you share on Snapchat versus Twitter?

Josh: On Twitter having worked at the company and sort of having lived this slightly more public life, I love talking about everything that goes on in technology that comes to my mind. I love talking about things around sports. I'm a huge Seattle Seahawks fan.

Evan: That's Twitter or is that Snapchat, or both?


Josh: This is mostly Twitter, because I find that that's where I get the most engaging conversations around topics of technology, and a little bit of politics, and a little bit of world events. On Facebook and on Snapchat I'm much more selective about who my friends are and the people that I'm sharing with. I just share much more about life. Sometimes it's a little bit about work, but a lot of times it's more about my personal life and family. My mom likes just about everything I ever post on Facebook.

Evan: That's a good thing.

Josh: That's a great thing.

Evan: If she didn't, would it upset you?

Josh: No. Sometimes. She doesn't need to like everything that quickly.

Evan: That's a sensitive issue? "Why did you not see it? Did you not like it enough?" Do you ask her?

Josh: No. It's just like, "Okay, do other things besides sitting on Facebook all the time waiting for me to post."

Evan: She's waiting for you. She wants to live vicariously through your life.

Josh: I do think about it. I just try to share real life there, and on Twitter I don't feel as comfortable living normal life publicly. Sometimes we all have our first-world problems that we occasionally whine about. It's fun to do that with the right group of friends, and it's embarrassing to do that on Twitter when everybody yells at you: "Why are you complaining about your Uber going too slow?"

Evan: Questions. Who's first? Over here, Rick. Who's got a mic? Okay. Go first.

Audience Question 1: Hi, Rick Smolan. I thought that LinkedIn's purchase of Lynda was brilliant. I was on an airplane the other day, and you can now spend six hours learning Photoshop or whatever you want. I'm curious as to how that decision was made. Likewise, all of a sudden LinkedIn's share price dropped in half in one day, which terrified the whole market. It seemed to me that the opposite should have happened. I'm just curious if you could talk about those two?

Josh: Yeah. I'm not involved in LinkedIn other than being partners at Greylock with the founder - I'm just a shareholder. These comments are totally not associated with the company at all. LinkedIn from the very beginning was about how do we actually help your professional life? We talked about this in 2004, in fact when I was there: how do we help you be a better professional? Obviously one of the ways was to give you access to your network and to your network's network, so you can actually reach people in a way you otherwise couldn't. Beyond the hiring use case, we spent a lot of time talking about the expertise use case. If I want to find somebody who is an expert in Photoshop, or an expert in privacy, or an expert in the legality of something, we really wanted LinkedIn to be the place you could do that.

As they've gotten more into empowering the economic development of everybody, they realized that giving you the platform to learn and create skills was incredibly important. Lynda, by far, was the leading platform. They had created so much great content and had so many educational ways to build great skills. LinkedIn saw it as moving forward in trying to help professionals be better - they saw that as a great fit. I think so far, I hope, it's been working really well for them. A platform where LinkedIn knows more about me as a professional than anywhere else, and can really help me be even better, I think is going to be great.

In terms of the share price, I think the stock markets and the innovation that's happening at companies is sometimes out of whack, and doesn't always understand. It's LinkedIn's job to keep building a great business, and prove it.

Evan: The next one's over here.

© Robert Wright/LDV Vision Summit

Audience Question 2: Hi Josh. Adaora, Rothenberg Ventures. I'm just wondering about what your thoughts are on what some consider the VR/AR hype and what others don't?

Evan: Good question.

Josh: If you were to ask me, in five years will everybody have a VR experience on a frequent basis - weekly or a couple times a month - or will AR be something that is much more pervasive? I think the answer to that is probably yes. I'd be really surprised if it's not. If you ask me if that's in two years, I don't know. I think VR is a great entertainment experience right now, but I haven't been comfortable in there in a way where I've wanted to stay in for a long time. I think there's a lot of physical technology improvement still to happen, let alone all the content and great experiences to get created. I think we're over hyped in the short term, but probably not over hyped in the long term. Sort of in the same way the internet was in 1999. It was like, yes, all these things will happen online. It will be incredible, but it might not happen in two years.

Evan: Which is one of the biggest challenges as an investor, isn't it? When to invest in those people?

Josh: That's the thing. We've made, I think, one stealth VR investment. Our theory is that small teams building things that can be really core building blocks, and that as the market emerges can sort of scale up with it, get us much more excited than a bunch of companies that are spending a heck of a lot of money right now to build all the demos and everything else. I, by the way, think that the way most people are going to experience VR is going to be outside their own home over the next year and a half or two years. Very few people have the devices, but you'll go over to your friend's house. Even more, I think there are going to be a lot of physical locations you'll be able to go to that will become fun, kind of like what arcades used to be. Then in a few years it will get back into all of our homes.

Evan: Good analogy.

Audience Question 3: Josh, we in the media this spring really like to write about bots. What's it going to take for bots to go from hype to something that actually is relevant for people?

Josh: I think that's a great question about bots. We love talking about VR and bots because we know that they're new things that are really important, and yet the number of hours people spend in messaging apps is going up exponentially every single year. Right now we're all really good at talking to our friends, and when I want to talk to a friend now I don't think about phone calling them. I think about messaging them, and getting a message back. Yet when we want to talk to businesses, or get information elsewhere, you kind of have to call a business or go to their website or something else - where messaging is actually often the best interface.

I think we still are confused about bots. Right now the NLP (natural language processing) isn't quite there. We have very high expectations: if we say something, we expect a human on the other side to understand it and respond back. NLP is close but not quite there. None of these experiences are great, and you start to learn this very cryptic language to interact with your bot. I think we are a few years out from bots being natural enough to represent everything. In the shorter term, we're going to see all this business behavior - all the phone IVR (interactive voice response) trees, press one for this, press two for that - done way better in a bot. Just open up messenger, type in the name of the business like Comcast, go through the exact phone tree, and you can actually do everything much faster than you could sitting on the phone.

I think we're going to see all of that happen in the short term. I also think we are going to see bots that are content delivery, whether it's your shipment just got mailed or a daily newsletter, or a breaking news alert pushed into your messaging, because that's where you are spending all your time and where you're doing all your content. The interactive bots I think are going to take a little bit longer to play out. I do think that much shorter term we're going to do a lot more in messaging than just message friends.
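The IVR-tree-to-bot mapping Josh describes can be sketched in a few lines. This is a hypothetical illustration, not any real messenger platform's API: a nested dictionary stands in for the phone tree (the menu text, business name, and helper `step` are all invented for the example), and each typed choice either descends into a sub-menu or returns a final answer.

```python
# Hypothetical IVR-style menu tree exposed as a text bot.
# A dict node has a "prompt" plus choice keys; a string is a final answer.
MENU = {
    "prompt": "Support: 1) Billing  2) Outage  3) Agent",
    "1": {
        "prompt": "Billing: 1) Balance  2) Payment",
        "1": "Your balance is shown under account > billing.",
        "2": "Payments can be made at account > pay.",
    },
    "2": "We see no outage in your area.",
    "3": "Connecting you to an agent...",
}

def step(node, choice):
    """Advance one level of the tree; return (reply, next_node)."""
    nxt = node.get(choice)
    if nxt is None:                 # unrecognized input: repeat the prompt
        return ("Sorry, try again. " + node["prompt"], node)
    if isinstance(nxt, str):        # leaf: final answer, restart at the root
        return (nxt, MENU)
    return (nxt["prompt"], nxt)     # sub-menu: show its prompt

# Simulated conversation: the user types "1", then "2".
reply, node = step(MENU, "1")
print(reply)
reply, node = step(node, "2")
print(reply)
```

Unlike a phone tree, the full transcript stays visible in the chat, so the user can read back instead of re-listening to menus - which is much of the speed advantage Josh points to.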

Evan: All right, a couple last questions to end off. These are fun and challenging questions. Why do most entrepreneurs not succeed?

Josh: Because it's really, really, really hard to take a great idea and a great group of people, build it, hit the right market at the right time, and get it into people's hands. You have to think of it, instead of why don't most succeed, as how the hell do the couple that really do, really succeed. It's so much luck, and you can do all this hard work and get in a great position, and if you don't also get lucky at the same time to capitalize on all that hard work you've done, it just doesn't happen. It doesn't always happen.

Evan: I didn't mean to be negative, but both of those are the right ways to look at it. Relating to that, one of the questions I love asking all investors, because I think it's actionable for the audience: you speak to a lot of entrepreneurs - your favorite personality trait of an entrepreneur, one word answer, and your most hated personality trait of an entrepreneur, one word answer.

Josh: We're at a Vision Summit, so the number one word I would use is vision. Just somebody who can paint this picture of a world five years from now and just get me excited and intoxicated about that. The other word I like to use, I'll give you two, is learner. Somebody who's constantly learning and processing new information and coming up with new theories on the world. The most hated trait is arrogance - a non-listener. Somebody who doesn't listen and learn and interact that way.

Evan: Mine would be, as I said yesterday, passion, and actually for me it's selfish - a CEO that's selfish. Some may like that because they're going to do it no matter what, but if they're not a team player and thinking about the whole ecosystem, I think they're doomed to fail. Josh, thank you very much. This was fantastic.

Josh: Thank you everybody. It's a pleasure.

The annual LDV Vision Summit will be occurring on May 24-25, 2017 at the SVA Theatre in New York, NY.


Applications of Computer Vision & AI Will be Life Transformative

(L to R) Jessi Hempel, Senior Writer at Wired, Liza Benson, Partner at StarVest Partners, Howard Morgan, Partner & Co-founder of First Round Capital, and Alex Iskold, Managing Director at TechStars © Robert Wright/LDV Vision Summit

Join us at the next annual LDV Vision Summit.  This panel, “Trends in Visual Technology Investing” is from our 2016 LDV Vision Summit. Moderator: Jessi Hempel, Senior Writer at Wired. Featuring: Liza Benson, Partner at StarVest Partners, Howard Morgan, Partner & Co-founder of First Round Capital, Alex Iskold, Managing Director at TechStars.

Jessi: I'm Jessi Hempel. I'm a senior writer at Wired and a big fan of Evan Nisselson of LDV, and I'm here with a panel of folks to discuss a topic which I'm sure is going to be a surprise to all of you: visual technology investing opportunities. It's kind of what the day is about. I could introduce our panel, but I'm actually going to let them each introduce themselves, because I'd like you each to also give a couple of lines about the institutions you represent. I know we've got a late stage, an early stage, and an incubator guy. Alex, let's start with you.

Alex: I'm Alex Iskold. I'm managing director here at Techstars in New York. Techstars is, I think, a world-class accelerator. We help early stage companies go faster by surrounding them with incredible mentors and helping them figure out the business and secure capital.

Jessi: Thanks.

Howard: Howard Morgan. First Round Capital, early stage ventures and we try to help them grow.

Liza: I am Liza Benson of StarVest Partners. We are an expansion stage venture fund focused on primarily B2B SaaS and technology enabled business services and typically we invest in companies between 2 and 20 million of revenue.

Jessi: We've kind of got the whole gamut represented on our stage. So, I want to start big, and I want to start by framing this according to something that Evan has spoken and written quite a bit about: this idea of the Internet Of Eyes, the advancement of the Internet Of Things into a sort of visual representation. The idea that all of the objects around us could be watching us, which unlocks opportunities and also terrifies me. Let's talk about what those opportunities could be. Let's imagine a little bit.

Howard: Well, obviously if it knows what I am doing - I've just woken up - it can take actions. It can see that I actually got out of bed or I didn't, in which case it'd have to wake me up again, you know, and sort of go from there to try to understand my intent for things, if I'm willing to let it watch me all day. I'd like to be able to say no for a while. You know, turn it off. Close your Internet Of Eyes for a little while, and believe that it's actually closed for a while. Obviously, we've already invested in things that are doing that in shopping, and stores, and so on, and the same thing is going to be true in factories.

(L to R) Jessi Hempel, Senior Writer at Wired, Liza Benson, Partner at StarVest Partners, Howard Morgan, Partner & Co-founder of First Round Capital, and Alex Iskold, Managing Director at TechStars © Robert Wright/LDV Vision Summit

Jessi: This idea that cameras are infiltrating our lives, creeping into objects around us, has been ongoing for a while. So, let's talk about the specific opportunities that 2016 unlocks. You know, I've had a camera in my iPhone for a while. What's new? And maybe I'll jump to Alex, because you're seeing a lot of what's new in the companies that come to you.

Alex: Yeah. I'm trying to figure it out. I have no idea, but in my camp this falls into separate categories. There's sensors. There's things that detect moving objects. There's things that have cognition and recognize things, and then there's algorithms that actually act on the information that sensors capture, and so I would bucket them all in slightly different categories and then kind of re-assemble to determine how it could be helpful. Let me start with an idea of applications for people who are visually impaired. Could we build something that is a camera in front of someone's door that would take pictures and actually recognize family members and caregivers and other people that come in and would tell you who is there? To me that's like a super interesting and pragmatic application. Way more useful than something following me and trying to be helpful where I don't need its help.

Jessi: For sure. For sure, but I guess my question is, are you seeing those kinds of companies right now?

Alex: I'm not running the IoT program - Jamie Fielding does - but I do see some of these. The areas that I've been focused on, for example, are more like frontier tech: satellites taking pictures and running some sort of computation. My focus is mostly on, not necessarily the sensors, but what do you do after you capture the data?

Jessi: Right. When we're talking about visual technology, how much is it integrated into technology generally - such that it's maybe not even useful to put the "visual" framework on it - and how much is unique to adding cameras to things?

Liza Benson, Partner at StarVest Partners © Robert Wright/LDV Vision Summit

Liza: I think that visual technology enables you to go into the real world, so that you're not only dealing with online. I mean, in our fund we've invested in some things that are looking at in-store analytics, because most shopping - you know, 80% of retail - still happens in stores.

Jessi: Right.

Liza: So that's a really important thing that visual technology enables us to do that is separate from, you know, eCommerce technologies or something of that sort.

Jessi: Tell us a little bit more about that, because you were talking about a company, RetailNext...tell us a little bit about that company.

Liza: What they're doing is using visual technology in stores to actually create the same sort of analytics that you'll see online, in a store. How do people dwell? How do people navigate around a store? All those things are done via cameras, and it yields some very interesting insights for large retailers out there.

Jessi: So, where are retailers on the spectrum of their capability of using it? Is this R&D at this point? Are they actually deploying it?

Liza: No, they're actually deploying it, but I think, you know, it's absolutely nascent in terms of penetration - very few retail stores have this type of technology everywhere.

Jessi: What is it going to take for it to be deployed more widely? Is this a question of computer comfort?

Liza: Time, money. I think it's just a matter of time.

Jessi: Yeah. Got it. Howard?

Howard: Well, one of our companies, which does software for drones and their cameras, is involved in doing inspections of power lines. There are lots of sensors on power lines, but if you want to see if an insulator is broken, you fly the drone by and you can see, "gee, that insulator looks a little bit off," and you can zoom in and you can fly around. You can do all sorts of things visually to inspect things in out-of-the-way places. Non-visual sensors alone just don't give you enough information.


Jessi: Fair enough, and the name of that company?

Howard: Airware.

Jessi: Airware. Cool. You had mentioned another company when we were talking earlier. Is it Nanotronics?

Howard: We talked earlier about Nanotronics, which is able to take very high resolution images - super high resolution, almost the kind of thing you get from electron microscopes - captured with an optical methodology and processed in real time. That's for areas where you want very high resolution, because the defects that they're trying to find visually are way too small for the naked eye to see and way too small for normal cameras to see. Visual technology is advancing dramatically.

Jessi: So, where are those companies in their lifespan right now?

Howard: Airware's a few years old, has raised a lot of money - with us, Kleiner Perkins, and others - and is selling. Nanotronics is actually close to break-even and is installed in a lot of companies already. So, they're pretty far along.

Jessi: Pretty far along? As investors out there looking, what is the flag that tells you that something has promise in this area, that you might be interested in investing in it?

(Speaking) Alex Iskold, Managing Director of Techstars © Robert Wright/LDV Vision Summit

Alex: Just following up on what Howard said, there's a company that just graduated Techstars that's applying drones to building inspection, and it pairs human smarts and software and drones to all act together, because it's super dangerous to go up those buildings. Now, going back to your question, in my mind this is a phenomenal application of drones and vision. It's solving very obvious real life problems. I don't know if it's necessarily a billion dollar company, but it's certainly solving a real problem and improving people's lives and potentially saving people's lives. So, when I look at this tech, I think the challenge is that some of this stuff is very far out there - like, some of it is research projects. So when we look at the companies we ask, can we help accelerate them now? Is it an investable business? I'm sure investors at different stages are looking at the tech through different lenses.

Jessi: Right.

Alex: What's the maturity of this tech? I think it's an interesting question.

Liza: I think it's like anything else. It's efficiency, and doing things that were either too expensive or too difficult to do before. You know, some of the things that you were mentioning in terms of examining power lines - how could you possibly send a guy up on every pole to look at all these power lines? It's essentially economically impossible. We don't invest in drones - it sounds very exciting - but I think from an investor's standpoint it's completely disruptive to the current way of doing things.

Jessi: Fair enough. So, what would you like to see improve so that some of this technology gets even better?

Alex: I don't know. I was thinking about the company that does sort of semantic object recognition and boundary detection, and the application that I can see immediately is in augmented reality, because if you were to build any kind of augmented reality you actually need a very precise way to identify where you are, and navigation. That is an example of something that's in the lab right now, but me as an investor and a business person, I actually see the relevant application and related vertical that maybe researchers don't necessarily see. I think there's an advantage in these kinds of forums and mixing us together.

Something that's in the lab right now, me as an investor and a business person, I actually see the relevant application and related vertical that maybe researchers don't necessarily see. I think there's an advantage in these kind of forums and mixing us together.

-Alex Iskold, Managing Director of Techstars

Howard: Two things. One, I'm sort of old school: better, faster, cheaper. That's kind of the usual mantra in technology investing. But also, particularly when we're talking about vision, we mostly think about human vision, and a lot of what's happening in agricultural inspection and in medical is outside the visual space. It's near infrared. It's ultraviolet. It's X-ray. So, expanding vision - and those sensors are just getting to the point where they're cheap enough to be used. Visual sensors, because of camera phones, are almost free.

Jessi: Right.

Howard: You can put cameras everywhere, but if you want near infrared or ultraviolet or X-ray, those are still pretty expensive, and we need to move those along.

Jessi: Were the costs to come down on those, Howard, what could they begin to unlock?

Howard: Well, that's what we have entrepreneurs to dream up ...

Jessi: That's your job, guys. (To Audience)

Howard: Those things, but think about X-ray technologies. If you had X-ray, there are a lot of military uses. You want to see in the buildings where the terrorists are. You want to see where the bombs are that are hidden in the walls. It's stuff like that - medical also. We heard about a company looking at radiological images. Right now you have to go through an MRI; that's a pretty painful, or at least uncomfortable, thing. Why can't we get visual inspection of the human body the way we're doing visual inspection of other things, without the same level of discomfort? So, I think a lot of things could be interesting.

Jessi: Fair enough. I want to go back to what Alex said about AR. It struck me when Dijon was on stage just a little earlier. He said, "I want to like AR." He also spoke about 2D, 3D, and 360 images as being sort of a bandaid fix until we get to VR and AR, and I'm curious what each of your perspectives are on the future - the immediate and the far-out future - of these technologies.

Alex: I like both AR and VR in the sense that I see them as fascinating and mind blowing. I'll quote my daughter: my older daughter tried on a VR headset, and when she took it off she said, "Daddy, virtual reality's much better than reality." This is one of those sentences that gets etched into your brain, and until you go senile you're going to remember it.

Jessi: Alex, how old is your daughter?

Alex: She's 12. So, my quick download on AR: we haven't invested in any of those companies, but I'm very excited about it. In terms of the bandaids, there's a Techstars company called Sketchfab, which is the largest repository of 3D models, and they just released a VR viewer. Suddenly it becomes much more fascinating, because you literally can go through the British Museum and experience these artifacts, and it's literally a mind blowing experience. So, I think it's very true that 3D itself is a little "boring" from the consumer perspective, but putting that stuff in VR becomes pretty incredible.

Howard: We have a company, Parcelable, which is AR and selling into the oil well industry. Basically the AR is used when you need hands-free operation for people fixing machines. They're able to see the instructions; they're able to see what the next steps should be. But instead of having a tablet or a book while their hands are getting oily and dirty and they're holding tools, they're able to see it all right in front of them through the AR. Very effective usage, and it started out using Google Glass. Now it's using other technologies, but AR has real uses in the industrial sector when you need hands-free operation of things and you need to see instructions.

Jessi: That makes sense to me when I think about the oil industry because I would think that they have the deep pockets to finance a tool like that and it could be very helpful to them. Are tools like that at any time in the near future going to be accessible to consumers in a useful or viable way?

Howard: You want to put together your Ikea furniture, and you know how many of us have done that, right? Wouldn't it be nice to have that AR giving us the instructions and have the vision pick out the screw that it's telling me I need to have? I think that is going to be viable for consumers.

Jessi: Sure, sure, sure, but just to push back on that for a second: I've piloted the HoloLens, and I actually fixed a light switch with the help of an electrician and the HoloLens, but I still had to have it connected to this big ole computer behind me to use it, and the field of vision was small. So, it felt like it was actually pretty far from reality.

Howard: Yeah, it's early. We're focused mostly on B2B applications, but think, for example, going back to an earlier point, about situational training - like firefighters, nurses. Do we believe in physical training? Do we believe that it's better to be in the lab and try stuff out versus just reading a book? Sure. Therefore, once VR and AR achieve a level comparable with reality, it becomes incredibly interesting. To your point, though, I do think it's pretty far out. I wouldn't bet on it going mainstream in the next three to five years. I mean, three to five would probably be the most bullish I would get.

Jessi: Liza, how closely are you paying attention to these trends?

(Speaking) Liza Benson, Partner at StarVest Capital Partners © Robert Wright/LDV Vision Summit

Liza: Personally, I would love it because it seems like it'd replace a husband for anything that you really needed done...

Jessi: Totally true, right?

Liza: On a personal level I would love it, but we haven't really seen anything at the expansion stage on the B2B side. These all sound really early. Some of the things that you're imagining, Howard, would be very interesting to us, because they're applications used in the non-consumer, B2B space. But it does sound a little far off, though I would certainly be a user of it.

Jessi: Fair enough. I think we all would. Do we have any questions from the audience? Anybody out there? Okay. This is your two minute warning to come up with something good while we jump back in here. So, what happens then as we move into a world of 3D images - do all our images become 3D? What happens to 2D images?

Howard: I don't think they do. We've had 3D movies since the '50s and they've never taken off. We've had 3D stereoscopic images, and except for certain uses they haven't been that critical. So I'm kind of with Dijon on the 3D thing. VR is a very different experience than looking at a 3D image. Back in the '90s, MetaCreations bought a company called Real Time Geometry that could take 3D photos of objects, and we were putting 3D pictures on the web so that you could look at what you were going to buy and turn it around, and it just never became a very critical thing.

People are very much used to seeing things in 2D, through catalogs. We've sort of grown up with that. The 3D need was less than we thought. With 3D movies, Avatar was an amazing success and then kind of nothing. There are IMAX 3D things that you see every now and then, but those movies do just as well or better - make more money - in the non-3D versions than they do in the 3D versions, and except for some specialized areas like real estate, where you want to do walkthroughs, we just haven't seen that.

Jessi: Got it. So Alex, do you think anything is different now? Do you think we are going to want 3D now? I take your point that 2D's not going anywhere, but is 3D then really upon us? Because now that I think about it, I totally remember Avatar. It feels like a lifetime ago.

Alex: It's situational, right? I think we're framing the question as 3D versus 2D, and I feel like it depends. For example, if you're an engineer and you have a 2D sketch versus creating a 3D model, you put more inputs into a 3D model. I think the trend has been that, notoriously in enterprise but also in consumer, visualizations aren't always winning. For example, on Wall Street they still love their Excel. There have been so many companies trying to go, "hey, I can visually show you the portfolio," and the traders don't care.

Now, "why?" is a totally different question, but I think it depends on what the application is. In B2B, VR and 3D are already starting to make inroads, certainly in architecture. We have a company called IrisVR that basically helps you walk through a building before it's even built, so that's helpful. But with consumers, to your point, I think VR is going to be the best 3D we've got, once it works really well.

Jessi: So when you're thinking about visual technology, where does a consumer's sense of privacy fit in as you're piloting some of these? I mean, you guys talked about some very cool early stage applications, for example, and they all require me to be pretty comfortable with the fact that I'm being photographed.

Howard: Well, I brought up earlier this morning the Enemy Of The State movie, or Dave Eggers' The Circle. You know, there are pretty dystopian views of what can happen. Even go to London and you're on camera pretty much anywhere you are. So the answer is, it's not that we have to be comfortable; it's that the people who own these images have to be accountable and use them in good ways. And most people have proven over the last 20 years of the internet that they're willing to give up quite a lot of privacy in return for pretty good other value - other things that they get, whether it's social networking, shopping, etc. I don't know how far that'll go.

Jessi: You mentioned some dystopian tales that are familiar to me. Can you guys think of any tales that are the opposite? That offer a hopeful view of what the world will be like once we're able to capture it all?

Howard: Well, you can see accidents. If the image recognition says, gee, we just noticed an auto accident in this out-of-the-way place, you can dispatch emergency life-saving services there minutes before it would've ever been reported, and you can see that in lots of possible areas. You can see it in hospitals, where patient monitoring has a visual characteristic. So, I think you can see a lot of positive things; they're just not happening as quickly.

Jessi: We don't tend to write science fiction about them?

Alex: I mean, to me, I'm just horrified at all the false positives. I'm an algorithms guy. I'm a math guy, and I find it insanely unhelpful when things around us try to suggest things to us and try to butt in, because they're doing it in such a quirky and incorrect way. I think that, yes, there are going to be incidents when we have algorithms that detect things that are going to save lives, but my question is, how many times is this algorithm going to blow the whistle and be incorrect? So, I think we need to give them a chance on one hand. On the other hand, I don't know if I want every millisecond of my life to be captured, analyzed, and cataloged. I kind of want to be human still, a little bit. I don't know. Maybe I'm getting old. Who knows?

Howard: Well, Gordon Bell has gone around recording his entire life for about the last 20 years at Microsoft, and of course there are some issues in some states with the people he's photographing - particularly if he's recording them, he has to get permission. But you know, the old Greek philosophers talked about the examined life; if you're on camera 24/7, it's easy to examine and re-examine that life.

Jessi: The flip side of that is if you're on camera 24/7 it's easy to let go of the habit of actually examining your life because it's always there for you.

Howard: True, but look at cameras for the police. The trend now is to put cameras on police, and that's certainly impacting the way that they interact with some members of the public - and probably for the better.

Alex: Jessi, also going back to your point, I actually think, for example, social media is chewing up a ton of our lives. I'm looking at my kids texting and all...

Jessi: They're not texting. They're Snapchatting.

Alex: They're texting. My kids don't have Snapchat yet, but they're sending each other text messages, and we don't butt in, but I feel like there's a lot of time that goes into it. As for the content, I don't really look at what they're sending, but I do see a lot of emojis flying around. I don't think the content is super intellectual. They read books. They still do a lot. My point is, when I write a new blog post, I want to know how many people liked my stuff on Twitter and whether the post got a ton of likes on LinkedIn. But that takes a lot of time. Now, if we're constantly recording ourselves, if everything is captured, do we actually have time to live our lives, and what does it actually mean? Maybe that's too meta, but it sort of ...

Jessi: In some ways, Alex, that's the question these conversations always come back to. And Liza, you look like you've heard it before.

Liza: My pre-tweens, I guess, spend their entire lives online, and they're more concerned about how many likes they get than about privacy. Privacy means absolutely nothing to them. They want to be broadcast. They want everyone to like them, to comment. That's so much more important to them than privacy.

Howard: The issue of living your life. I mean, you go to a concert, and how many phones are up in the air? People are not quite listening to the concert but showing the rest of the world that they're listening to the concert. And that's a little bit of a frustration, you know, where you really want to experience some things first by yourself and have your own feelings about them, rather than being totally influenced by the crowd around you, and particularly by the online crowd.

It's not that Snapchat is a documentary feature of an experience but that it is actually a facet of the experience. If I'm an entrepreneur how do I design into that world? Do I want to be designing in order to help people pull out of it or do I want to design in order to help them dive into it more?

-Jessi Hempel, Senior Writer at Wired

Jessi: Well, so there's also a school of thought that would say that actually that is now part of the experience. Snap or it didn't happen. It's not that Snapchat is a documentary feature of an experience but that it is actually a facet of the experience. If I'm an entrepreneur how do I design into that world? Do I want to be designing in order to help people pull out of it or do I want to design in order to help them dive into it more?

Alex: Well, I think as an entrepreneur you have to go with the flow, and you have to go into the future. It's not inconceivable that some of the things we do will end up being dumb, but that's okay. We will self-correct later. I am a deep believer in the power and creativity of humanity, but you can't help but wonder, to Howard's point, if somebody's Snapchatting half of the concert, they're probably not watching it. That's their choice. As a founder, if you build something that makes this go viral, you'll make a lot of money, so probably go do that.

Howard: Well, you could do both, right? You could make an ad blocker, or you could make a better ad network tool, and they both will sell. So, you can pick which one you're most passionate about.

Jessi: Fair enough. Anything from the audience? Oh, right, way back here. Give us your name and affiliation quickly and then your question. There's a mic coming to you.

Audience Question 1: Thanks for speaking. My name is David Blutenthal, CEO of Snapwave. We're a global community that combines photos and music for more visually engaging music experiences. Since we're talking about music a bit, and since we're here today to talk about imagery, I'm wondering what you think of this as an investment space: engaging music experiences with any sort of visual layer. Photos, VR, video, what have you.

© Robert Wright/LDV Vision Summit


Jessi: So, investments based for engaging musical experiences?

David: Correct. Thank you.

Howard: Well, look, the fact is that from sort of tweens on, every few years there's a new music thing that catches on and becomes very, very important to them, so it's a very ripe investment space. But it's a very fickle one, and so it's a little difficult. At First Round we've mostly avoided them because, you know, we've seen how fickle it can be. You get a million users in year one and then you're back down to a hundred thousand in year two, and so it's difficult for us to know which one's going to become the next Snapchat and which one's going to become the next Whisperer, you know, or Secret or whatever.

Jessi: Fair enough. You can't even quite get the name, Secret or Whisperer. Is Whisperer even still around? Yeah. Alright, another question from the audience. Back here.

Audience Question 2: Hi, I'm Derek Hoiem, CEO of Reconstruct and we hear a lot about social media. I was wondering if there are any sectors that you think are heavily under-invested in?

Alex: Not social media, that's for sure. I think you guys are working on mind-blowingly awesome next-gen applications that are going to be totally life changing. I think vision algorithms, machine learning, AI, all of these things, not just for the sake of the algorithm but when applied to real-life businesses, are going to be completely transformative, and already have been, I think. So, any one of those areas is highly interesting, at least to me personally.

Jessi: Cool. Let's take a question right over here.

Audience Question 3: Talking about RetailNext, a company which can zoom inside a store and then give some actionable information, how do you connect that with an increase in revenue? Is it just based on some past experience, or what is the connection? How do you sell these kinds of things to large corporates who would want to use them?

Liza: Well, I mean, they're interested in many facets of the consumer experience: whether or not customers had good or bad customer service, even in terms of merchandising - you know, what was interesting to the consumer. What did they dwell on most? What caught their eye? Those types of things are very difficult to ascertain without seeing the movement around the store. Those are the types of things that we've seen retailers are very interested in.

Audience Question 3: Yeah, but how do you put a value on that, is my question. If I were to sell it to a company right now, I would ask, how much do you want this for? Right? For free it's great, but for a million dollars, where do you draw that price point?

Liza: You know, I don't. I haven't seen their latest ROI model, but I think from a merchandising perspective, if you have an idea as to what's working vs. what's not working, what store layouts work as opposed to store layouts that don't, that's huge in terms of uplift in sales.

Jessi: Okay, and there was a question behind you. Right back here. Thank you.

Audience Question 4: Thank you. My name's Jacob Loewenstein. I run MIT's VR and AR community. I'm curious. It sounds like when it comes to investing in social media there's a huge advantage if you can predict shifts in societal values. So, suddenly people value publicizing themselves, then perhaps suddenly people prefer privacy or sharing only among a niche audience. I'm curious if any of you predict or see an upcoming shift in values that might tip your hand or give you a clue as to where to invest next?

Howard Morgan, Partner & Co-founder of First Round Capital © Robert Wright/LDV Vision Summit


Howard: That's a terrific question, because what we do see is that roughly every two years, the tweens are the predictor. They're the ones who do the next thing, and the difference today is that the kids who are becoming tweens today have pretty much been on iPads for five years, so they've had a much different growth experience than even the generation two or three years before them. They're sharing differently, and I don't know that I can see what to predict yet, but there are definitely some differences happening. One of them is the importance of music to them. You know, they're not texting anymore. As Jessi said, they're Snapchatting, and they have the time to figure out ...

You know, The New Yorker had a fantastic cartoon last week which showed a patient in the doctor's chair with his head exploded. The caption said, I was just another 40-year-old trying to learn Snapchat. What you have as a tween is time to explore an app that we as adults in general don't have, and Snapchat has no instructions, so you have to explore it to figure out what to do with it. We don't have the time to do that, but this new generation will have even more language at their disposal to figure out what these things might do. So they'll explore them faster. So, I think we'll see a faster pace.

Jessi: Just to build on that, one thing that I'm very curious about is the visual language that's emerging, that this new, young generation is pioneering. I'm not fluent in reading it, let alone writing it, but it feels to me that people assume you can understand the language because it harkens back to pictography: if you could see pictures on the cave wall, then certainly you can make and share images. But as I look at that language, in particular as it emerges on the Snapchat platform, I think we're moving toward a world in which there's going to be a nuanced way of writing in images that carries the same nuance that writing in language carried in the last century, and we're going to have to somehow get ahead of decoding it.

Howard: IMHO. I mean, we had to get through that battle with the text world, which changed what we were writing, right, and learn that language. And it is a new language: the way you're doing things in Snapchat, what things you're putting on people, the emojis you're putting in text. But it's not unlearnable. It's just that kids always want a language that they can keep from their parents, from their elders, for a little while.

Jessi: Fair enough, and I guess in that paradigm I qualify as the elder?

Howard: Yeah.

Jessi: Yeah, thanks. I don't want to leave this question, though, because it's a great question. I want to go back to sort of what cultural shifts you might see just up ahead and where they might come from.

Liza: We don't invest in consumer just because it is so fickle, and we're more interested in business models that are longer lasting. But I think it's extremely challenging. Just having tweens, seeing how they communicate and what they do and what's hot now and what's not - I mean, it's very difficult even as a parent to make sure that they're being safe and to keep up with what they're doing, let alone investing in it. So, anyone who does that, hats off.

Jessi: Yeah. Fair enough.

Howard: I think the other big cultural shift is the, and I don't want to say it's attention span, but the byte size of what they're doing. You know, 50 years ago I would write letters. Multiple-page letters to people, certainly in social interactions. You know, what's the longest thing? I wrote a blog post about eight years ago saying that the Gresham's law of blogging was that cheap tweets would drive out the dear currency of blogging. Blogging takes a long time, so even though those mediums are doing reasonably well, so much more is happening in this short form, whether it's 140 characters or 500 characters. It's not 500 words, and that's been a real difference, and that's one of the biggest differences - everything has to be said or thought through in something that can be consumed in one or two minutes.

Jessi: One or two minutes. Which is exactly how much time I have to sum up all that we have talked about in the last 40. Thank you guys so much for joining us for this conversation.

The annual LDV Vision Summit will be occurring on May 24-25, 2017 at the SVA Theatre in New York, NY.

March 31 Deadline for Competitions at LDV Vision Summit

Startup Competition:
Visual technology startups with less than $2M in funding are invited to apply to our annual LDV Vision Summit Startup Competition.

Entrepreneurial Computer Vision Challenge:
Students, professors, PhDs, enthusiasts and hackers are invited to apply to showcase brilliant computer vision projects at our Entrepreneurial Computer Vision Challenge (ECVC) by March 31.

To apply, simply fill out a form with a link to a pitch deck or recent project.

Finalists present at our 2017 LDV Vision Summit to over 500 investors, startups, and technologists as well as incredible judges like Albert Wenger of Union Square Ventures, Josh Kopelman of First Round Capital, Marvin Liao of 500 Startups, Jenny Fielding of TechStars, Serge Belongie of Cornell Tech, and many more...

Finalists from prior years have raised funding, found partners, and hired new talent from presenting at our Vision Summit.

Semifinalists receive remote mentoring from LDV General Partner Evan Nisselson, then have the opportunity to present in NYC on May 22 to our experts in computer vision & entrepreneurship, and receive complimentary tickets to our Vision Summit.

Examples of visual technologies: businesses empowering photography, videography, medical imaging, analytics, robotics, satellite imaging, augmented reality, virtual reality, autonomous cars, media and entertainment, gesture recognition, search, advertising, cameras, e-commerce, sentiment analysis, and much more.

[Photograph of judges from 2016 Startup Competition including, in no particular order, Jessi Hempel, Senior Writer at Wired, Christina Bechhold, Investor at Samsung, Brian Cohen, Chairman of NY Angels, Taylor Davidson, Unstructured Ventures, Barin Nahvi Rovzar, Executive Director of R&D and Strategy at Hearst, Adaora Udoji, Chief Storyteller at Rothenberg Ventures, and others such as Josh Elman, Greylock, David Galvin, Watson Ecosystem at IBM Ventures, Jason Rosenthal, CEO at Lytro, Steve Schlafman, Principal at RRE Ventures, Alex Iskold, Managing Director at Techstars, Justin Mitchell, Founding Partner at A# Capital, Richard Tapalaga, Investment Manager at Qualcomm Ventures. © Robert Wright/LDV Vision Summit]

Carbon Robotics is Growing an Unstoppable Team to Create Innovative Products After Win at LDV Vision Summit Startup Competition

Rosanna Myers, CEO & Co-founder of Carbon Robotics © Robert Wright/LDV Vision Summit


The LDV Vision Summit is coming up on May 24-25, 2017 in New York. Through March 31 we are collecting applications to the Entrepreneurial Computer Vision Challenge and the Startup Competition.

Rosanna Myers, CEO & Co-founder of Carbon Robotics, won our LDV Startup Competition in 2016 and we had a chance to speak with her this March about their past year:

Besides winning the LDV Startup Competition, you won many awards in 2016 - Forbes 30 under 30 in Manufacturing, Best Demo Pit at Launch, CES Best Startup, Robotics Business Review Top 50 Companies in World - what have been the keys to your success over the last year?
It was a good year! I think the key to our success so far has been pretty straightforward – we focused on solving a hard problem that makes an important impact. My cofounder and I actually made that call very early on. We decided to not waste time on trivial matters, but instead to leverage our talents to help people.

From the beginning, we knew exactly who we wanted to help and why it was important. Of course, strategies and tactics change, but clarity around the core mission is what inspired us to overcome challenges. We had a lot of fun showcasing our work, but honestly I think that was the easy part.

What challenges have you overcome to achieve those successes?
When we started the company, we were told it wasn’t possible to build such a high performance robotic arm at our target price point. When we asked why, we got answers like “well, based on our distributor’s inventory and known configurations, it would be too expensive” or, my personal favorite, “our contractors told us it’s not possible.” We knew it was possible, but we also knew we had to get creative.  

For months, it was just pure grind. We dug deeply into the supply chain to learn what was easy to customize, wrote complex control software to correct for cheaper hardware, and drew inspiration from unusual disciplines. In the end, we successfully created a device that was within spec and about 10x cheaper – which felt incredible to accomplish.

An article in Inc. quoted you saying “Impossible is a mindset, too often the answers we got weren't at all based on physics, they were based on precedent - that is an important distinction.” Do you think that out-of-the-box thinking embodies a major characteristic of Carbon Robotics?
It’s essential. Derivative thinking leads to derivative products. When hiring, we screen for people who constantly challenge assumptions and make unorthodox connections, then give them freedom and support to invent. So far, our pickiness has paid off. Everyone we’ve hired is the best at what they do, but also highly cross-functional and laser-focused on shipping product.  

Being a young startup is quite clarifying in that regard – when you have to pull off the unreasonable, you need a team that’s unstoppable.  

Are you hiring at the moment/what positions looking for?
We are! Right now, we’re looking for Computer Vision and Robotics Software Engineers on AngelList.

On the CV side, we want people who understand the hard math behind low-level algorithm development (rather than just implementing open tools), and who have a system-level grasp of reconstruction pipelines. (A decent proxy is probably working in C++ rather than Python.) For software, we need people with deep robotics backgrounds who can translate real-world tasks into flexible behaviors.

In all cases, we’re looking for people who want to democratize robotics. A lot of what we’re doing is taking really hard problems and then building tools so that people without specialized knowledge can tackle them. We’re asking you to help us steal fire from the gods and bring it to mankind.

Why do you believe robotics is such an interesting application of Computer Vision right now? How is Carbon Robotics using CV to disrupt the traditional robotics manufacturing sector?
Robotics is one of the most high-impact applications for Computer Vision. Robotic arms today are largely dumb and blind, which majorly hamstrings their utility. Giving them eyes and a brain to better understand their environment is key to catalyzing adoption. We're at this amazing point in time where a whole bunch of developments in hardware and software are converging to create something damn close to magic.

There’s also enormous potential to impact people’s quality of life – everything from automating dangerous tasks to enabling assistive devices to creating the building blocks of an entirely new medium. In many ways, robotics today is like computers in the 80s or the internet in the 90s. There’s a big appetite for the first applications and a much more revolutionary shift underway.

What are you looking to accomplish in 2017?
I can’t reveal too much of what we’ve been working on, but we’ll have some exciting announcements later in the year.

Do you have any recommendations for startups in their seed stage who are applying to the LDV Startup Competition?
Definitely apply! We weren’t sure if we should apply since we were already fundraising and weren’t sure if we’d fit the criteria, but I’m so glad we did. The summit had the perfect blend of smart attendees and an intimate format, which made it easy to make meaningful connections.

Apply to our annual LDV Startup Competition by March 31, 2017.

Panel of Judges (in no particular order): Jessi Hempel, Senior Writer at Wired, Christina Bechhold, Investor at Samsung, Brian Cohen, Chairman of NY Angels, Taylor Davidson, Unstructured Ventures, Barin Nahvi Rovzar, Executive Director of R&D and Strategy at Hearst, Adaora Udoji, Chief Storyteller at Rothenberg Ventures, and others such as Josh Elman, Greylock, David Galvin, Watson Ecosystem at IBM Ventures, Jason Rosenthal, CEO at Lytro, Steve Schlafman, Principal at RRE Ventures, Alex Iskold, Managing Director at Techstars, Justin Mitchell, Founding Partner at A# Capital, Richard Tapalaga, Investment Manager at Qualcomm Ventures. © Robert Wright/LDV Vision Summit


How has winning the LDV Vision Summit Startup Competition 2016 had a lasting impact on Carbon Robotics?
I think the network is fantastic. We made several lasting connections from the LDV Vision Summit community and Evan has been a great mentor to us.

Applications to the 2017 ECVC and Startup Competition at the LDV Vision Summit are due by March 31, so apply now. Our next LDV Vision Summit will take place on May 24 & 25 in NYC. (Early bird tickets at an 80% discount are available until March 31.)

Computers Still Have a Long Way to Go on Visual Reasoning According to Larry Zitnick of Facebook

Larry Zitnick, AI Research, Research Lead, Facebook


Join us at the next annual LDV Vision Summit. This is a transcript of the keynote by Larry Zitnick, AI Research Lead at Facebook, "A Visual Stepping Stone to Artificial Intelligence," from our 2016 LDV Vision Summit.

Larry Zitnick got his PhD at CMU and after that he went to Microsoft Research where he established an excellent track record in object recognition and other parts of computer vision. Now he's at Facebook and, again, he's doing world class research. He's the leader of a very influential project called COCO which is Common Objects in Context and he also works at the intersection of images and language - which is an exciting area involving things like visual question answering.

At Facebook AI Research, what we try to do is advance state of the art AI in collaboration with the rest of community. Given that my background is in computer vision, I find myself thinking a lot, what is the role of computer vision in AI? That's what I want to talk about today.

Imagine that you could go back to 1984 and you could find yourself a graduate student and you said “here, read this paper. It's got a really cool title, it's called Neocognitron and if you want to solve recognition, all you need to do is three things. You need to figure out how to learn weights, you need to collect a huge amount of data, and you need to find a really fast computer and you would solve recognition.”

Now, graduate students being graduate students, they would go in and look at the weights part, and they'd look at the algorithms and say, "I want to solve the algorithm." That's exactly what they did.

They went and solved the algorithm. They developed Backprop. Now, graduate students being graduate students, say, “now all we have to do is collect more data.”

That took a lot longer, unfortunately. It took maybe another 30 years to finally collect enough data to do the learning.

Now we're in 2016 and we find ourselves asking the question, how are we going to solve AI? Which direction do we need to go in to solve AI? Well, the answer is obvious. All we need is more data, more compute and apply it to Backprop. This is exactly what we've done the last few years. We took a problem which is seemingly AI complete, image captioning, and we basically took tons of data, tons of compute and we ran it on these images and we got some really amazing results.

A man riding a wave on a surfboard in the water. Great image, great caption. A giraffe standing in the grass next to a tree. Again, fantastic caption for this image and I think a lot of people were really excited about this. Then after a little bit of introspection, we began to realize, this doesn't work if you don't have similar images in the dataset. If the images are too unique, suddenly the algorithm starts falling apart.

How many people have read the paper Building Machines That Learn and Think Like People? It is a paper from NYU, MIT, and Harvard. It's a great read. If you haven't read it yet, please read it. They took this state-of-the-art image caption generator, ran these images through it, and got "a man riding a motorcycle on the beach." Yeah, kind of correct, but it kind of missed the point altogether. You see this over and over again. If the test image is from the head of the distribution, you nail it. If it's from the tail, if it's a little bit unique, it completely falls apart. More data is not the solution.

Then, as computer vision people, you might think to yourselves, "we want to solve AI; which direction should we push in? Let's just make our recognition problems harder." What's something really hard that we can try to recognize? Mirrors. If you have a mirror like in this image here, we nail it; we can do a really good job. Right? What about this image? Can you detect the mirror in this image? In order to do this, you have to have a much deeper understanding of the world. You need to understand how selfies are actually taken. Some of the older people here might not get it.

Unfortunately finding really difficult images like this is really hard. It's already hard enough to create datasets, so I don't think this is the right direction either. If we want to solve AI, which direction should we go in? That's what I want to talk about. There's two things we need to do.

The first thing is learning. There's many different types of learning. There's the very friendly nice type of learning, which is supervised. Where you get data, it's complete. It doesn't have any noise in it, it's fantastic. It's our favorite friend.

You have semi-supervised learning, which is a little lazy, let's say, where you don't always get the data that you want. You have reinforcement learning, which is always trying to give you money or rewards for doing the right thing. Then you have unsupervised learning, which is really, really annoying. There is a huge amount of unsupervised data: we have a huge amount of data, but we don't have any labels for it.

Supervised learning. This is our bread and butter. This is why we've had the advances we've had so far: because of huge supervised datasets like ImageNet and, more recently, COCO.

-Larry Zitnick

Supervised learning. This is our bread and butter. This is why we've had the advances we've had so far: because of huge supervised datasets like ImageNet and, more recently, COCO. But creating these datasets is incredibly difficult and frustrating. Just ask anybody here who's tried to do this. Ask the graduate students. It's really hard to get graduate students to work on problems like this.

Semi-supervised learning. Let me give you an example of semi-supervised learning and why it is tough. If you want to learn a concept such as yellow and you have a bunch of image captions, you can identify images which have yellow in them and whose captions actually mention yellow. But there are other times when almost the entire image is yellow, yet the caption doesn't mention that there is any yellow in the image at all.

Now, you can learn really cool things using data like this. You can learn whether people actually say a certain concept is present in an image or not. We can learn a classifier which says, "hey, when there's a fence in the background of a soccer game, nobody ever mentions the fence. Whereas if there's a fence that is blocking a bear from eating you, somebody is going to mention that fence."
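To make the idea concrete, here is a minimal sketch of treating captions as noisy labels for a visual concept. The captions, the `weak_label` helper, and the concept word are all illustrative stand-ins, not anything from the talk or from a real pipeline; the point is only that caption-derived labels are cheap but noisy, which is exactly the challenge described above.

```python
def weak_label(caption: str, concept: str = "yellow") -> int:
    """Return 1 if the caption mentions the concept, else 0.

    The label is weak: an image can be mostly yellow while its caption
    never says so, producing a false negative for the classifier.
    """
    return int(concept in caption.lower())

captions = [
    "a yellow taxi waiting at the curb",
    "a field of sunflowers at noon",      # visually yellow, but label 0: noise
    "a bear behind a chain-link fence",
]
labels = [weak_label(c) for c in captions]
print(labels)  # [1, 0, 0]
```

A real system would train an image classifier against these noisy targets and could also learn the reverse mapping, i.e. when people bother to mention a concept at all (the fence example above).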

Reinforcement learning. Now, reinforcement learning in computer vision is kind of a weird mismatch right now, because in reinforcement learning you generally have some sort of interactive environment, and it's hard to do that with the static datasets that we're used to. What you find is that a lot of reinforcement learning is being done with gaming-type interfaces or with interactive interfaces. I think it's still a really exciting area to be looking into.

Then you have unsupervised learning. This is kind of the elephant in the room, because there's a huge amount of data. If we can figure out a way to learn features using this unsupervised data, we could do amazing things. People have been proposing all sorts of different tasks and seeing what kind of visual features they can learn from doing them. It has worked kind of okay, but still not as well as supervision.

Tickets for our annual 2017 LDV Vision Summit are on sale
May 24-25 in NYC

The next thing I want to talk about is reasoning, and specifically visual reasoning but not about the task of reasoning itself. What I want to give you is a sense for how difficult this problem is, and where we are as a community in solving reasoning.

Very recently, there was a paper that proposed the following task. You're given three statements: Mary went into the hallway. John moved to the bathroom. Mary traveled to the kitchen. You have to answer a very simple question: where is Mary? Computers have a really hard time answering that question. Let's let that sink in.

This is trivial. Really trivial. Yet computers cannot do it, because they can't understand what these statements are actually saying - and people are worried about AI taking over the world.
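To see why the task is trivial once you have the right representation, here is a hand-coded sketch that tracks each person's last-mentioned location and answers "where is X?". This is not the approach from the paper, which is about *learning* such behavior from text; the hard-coded parsing here (first word is the person, last word is the place) is an illustrative assumption that only works for statements shaped like the three above.

```python
def answer(statements, person):
    """Track each person's last-mentioned location, then look it up."""
    locations = {}
    for s in statements:
        words = s.rstrip(".").split()
        # Assumes "Person <verb> ... <place>." phrasing, as in the example.
        locations[words[0]] = words[-1]  # later mentions overwrite earlier ones
    return locations.get(person)

story = [
    "Mary went into the hallway.",
    "John moved to the bathroom.",
    "Mary traveled to the kitchen.",
]
print(answer(story, "Mary"))  # kitchen
```

The gap the talk points to is that a learned system must discover this entity-tracking behavior from data instead of having it written in by hand.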

If you're going to do reasoning, you need to be able to predict the world. You need to be able to see how things are going to progress into the future. Where are we right now? Right now, when we think about prediction, we're dealing with baby tasks where, for instance, we have a bunch of blocks stacked on top of each other. All you have to do is predict: are those blocks going to fall over or not? It's incredibly simple. If they do fall over, which way are they going to fall? This is something that can be done by a baby, yet this is the state of the art in the research right now. Think about more complex prediction tasks, where we model human behavior, where we have to simulate driving down roads and that sort of thing. We still have a long way to go.

Data. This is something that's interesting, because when you think about these AI tasks that we're looking at, a lot of them deal with a higher level of reasoning. They're not looking at pixel-level things. It doesn't matter if you start with real data or with more abstract data. There's been a lot more work looking at abstract scenes. Cartoon scenes. Atari games. Minecraft. These are areas where we can isolate the reasoning problem that we want to explore without having to worry about the messiness that is the recognition problem in the real world.

Finally, even if we solved reasoning, how would we know that we solved it? We all know the problems with the Turing Test and how incredibly frustrating it is to measure intelligence based on the Turing Test, because there are all sorts of ways of gaming it. One of the more recent things that we've proposed is to use visual question answering as a sort of Turing Test for reasoning and vision. You have an image, you have a question, and it combines reasoning with the visual recognition that is now beginning to work better. If you can do both of them well, then you do well on the VQA task. So far, what we've seen is that progress on this task hasn't been moving that quickly, and I think a lot of it is due to the fact that reasoning is not progressing that quickly, unlike recognition.

Looking forward. Up until 2016 we've made incredible strides in recognition. I said before that recognition is solved, but recognition is not solved; there is still a lot more work to be done. Only compared to 1984 is it essentially solved. Now, if we actually want to solve AI, we need to turn our attention. We can't just keep pushing on recognition. We can't keep thinking that AI is recognition. We need to start thinking of AI as AI and start solving these problems that have been ignored over the last thirty years.

If you look at reasoning in particular, we're just at the beginning stages of this and for me this is what's so exciting. There's still so much work to be done. High level, what's interesting is there's not a clear road map. We don't know how reasoning is going to be solved. We don't know how learning is going to be solved. We don't know how we're going to crack the unsupervised learning problem and because of this it's hard to give a time frame.

One thing I can guarantee is, as we explore AI, computer vision is going to be our playground for research in this area. Thank you.

The annual LDV Vision Summit will be occurring on May 24-25, 2017 at the SVA Theatre in New York, NY.

Are OTT and Live the Hottest Features of Video Streaming?

Patricia Hadden, SVP of NBCUniversal © Robert Wright/LDV Vision Summit


Join us at the next annual LDV Vision Summit. This transcript of the panel "Is OTT the New Black?" is from our 2016 LDV Vision Summit. Moderator: Rebecca Paoletti, CEO of CakeWorks. Panelists: Ed Laczynski, CEO of Zype; Patricia Hadden, SVP of NBCUniversal; Steve Davis, VP & GM East of Ooyala.

Rebecca: Just to start off, if you guys could each give one sentence about what your company does and one sentence about what you do at your company. You can start, Ed.

Ed: Sure. Hi, I'm Ed Laczynski. I am CEO and founder of Zype. We are a direct to consumer video business platform, and we make it easy for content owners and brands to build and grow OTT streaming businesses.

Patricia: Hi, I'm Patricia. I work for NBCUniversal. We create content. Within NBCU, I work specifically for the digital enterprise group, and we're responsible for creating direct-to-consumer SVOD services that complement the NBCU portfolio.

Steve: I'm Steve Davis. I work for Ooyala. Ooyala is a digital video platform technology company based in Silicon Valley and New York and all over the world, and we power the entire OTT platforms of some of the largest broadcasters and operators, which I'm sure we'll talk about today.

Rebecca: Yes, we will. Okay, so we've had some nice chats about "paradigm shifts." There are a lot of these shifts happening in digital, digital media, and particularly in video, especially in the last, I would say, nine to 12 months. Obviously the proliferation of technology and new devices has really changed the way consumers are dealing with video and the way advertisers and publishers are dealing with video and the responsibilities there. Digital video companies and digital media businesses have been looking at OTT and asking a lot of questions about it.

My company, CakeWorks, is a boutique digital video agency, and we problem-solve across the spectrum of digital video. There was this moment last July when I turned to my co-founder and said, "Seriously, OTT is the new black." It started because there was a week, in July, when you'd think people are supposed to be on vacation, and we were getting overwhelming inbound questions: What is OTT? I need OTT. I'm in Hollywood, I'm building an OTT app, I have to have OTT. I'd look at my staff and we would say, "Do they even know what OTT is? Do they know what they're asking for and what they're asking to be built?"

Before we get into OTT: digital video is a very acronym-heavy business. We have SVOD, AVOD, OVP, CPMs, CPVs, and I'm pretty sure we're all OCD about OTT. So let's define it, because I'm not sure everybody in the room is familiar with it, and even the industry we work in doesn't entirely understand what OTT is. OTT stands for Over The Top, but would each of you like to give one sentence on what OTT actually is?

Ed: Sure, OTT is distribution of content without a middleman.

Patricia: Yup. At least in the broadcasting world, it is being able to deliver video or content or entertainment directly to a consumer without an MSO. And there I've thrown in another acronym, MSO.

Rebecca: MSO, MVPD, these are the big cable operators, so the Comcast, the Time Warner Cables, the Telstras of the world.

Patricia: That's right.

Steve: Yeah, it's a disruptive shift in how to reach consumers, just piling on what these two said. It's another way for a consumer to get content without having to be beholden to the large cable companies.

Rebecca: This is important, because there's a whole world of people that think any app is OTT, so just because you have an app on your iPhone, that is not an OTT experience and it doesn't come with any of the issues that we are working with in the OTT space. Okay, so for you all out there, how many of you do not have cable TV at home?

Ed: I do not.

Rebecca: Well, there you go, Patricia, there's your answer. How many of you pay for Hulu? Not stealing your roommate's or your neighbor's, actually paying for Hulu? How many of you watch Netflix more than once a month? More than once a week? How many of you have an Xbox? Roku? Apple TV? How many of you have all three?

Ed: Sorry, I'm kind of in the business.

Patricia: I was like, "That's odd."

Rebecca: Ooyala also. With these devices, do you watch TV or video at home more than once a month? Streaming video on devices more than once a week? Every day? Awesome, we have a very OTT-savvy room, which is excellent. Do you think we're at the proverbial inflection point on this whole distribution of video content, or do you think we're well beyond it when it comes to OTT? Steve, you've been living the dream on the technology front of this.


Steve: Yeah, this has been the last 18 months of my life. I would start with this: we've come a long way in 18 months. In most of the opportunities our company saw 18 months ago, folks were asking us to define it for them: "We need OTT, but we're not quite sure what the hell it is." 18 months later, it's much more "we don't want to just be live in apps; can you help us describe the best approach to get an OTT offering, and should we do it?" Where we are now is much different than where we were 18 months ago. We're not quite at the tipping point, but it's being adopted on a much larger scale, where every single company at least has a strategy about OTT to go to market, whether it's good or bad.

Rebecca: Let's define that. Every company, meaning every media creator, publisher in possession of premium video content?

Steve: Well said. I mean broadcasters, operators, digital publishers, everyone realizes that a direct-to-consumer offering, or a new way to reach that consumer, is a must; they just might not know the best approach to do it yet. Whereas 18 months ago it was much more questioning, "Steve, can you guys come in and help explain what OTT is and what we should be doing?" That's gone, in 18 months. It is the fastest adoption of a technology I've probably seen in my career.

Rebecca: Patricia, from the content perspective when do you feel like you guys made this decision? Like we want to go in this direction, we kind of understand the ad model, we understand the subscription model, how did you set out to do Seeso? Actually, why don't you tell everybody what Seeso is?

Patricia: What Seeso is? Let me explain the genesis of how we thought about it first. We wanted to solve a consumer problem, not a business problem. What we were seeing is that these big OTT players were really great services if you knew exactly what you wanted to watch or if you were in the middle of a binge. If you just want to watch something good, you sit there and surf for 20 minutes, then you get frustrated, then you leave. The reason is that they've become this kind of supermarket of content; they have kids and horror and everything in between, and it's kind of the Costco effect. If you guys have ever gone to Costco (I have three kids at home, so I go all the time), it's overwhelming. What we wanted to do is essentially be the anti-Costco. We wanted to be the neighborhood restaurant, the neighborhood café.

Patricia: What Seeso is, it's an SVOD service, which is subscription video on demand. It's $3.99 a month, it's ad-free, and we wanted to be very, very specific and go with one niche genre, and that is comedy. We did a ton of research and a lot of ethnographies and realized that comedy, to Trina's earlier point, is this kind of universal language, but there isn't one place where you can find all the comedy you want. I think we have pretty good internet service here, so if you guys have not subscribed yet, feel free to go to your app store and download Seeso; it's S-E-E-S-O. There are so many nuances and facets within comedy, so we have standup and sketch and animation and scripted, and by the end of the year we'll have 20 original series.

Rebecca: Thank you. Okay, so enters Ed, who has a new startup that is, I lovingly call, OTT in a box. Where did you guys start from and what problem did you think you were solving when you set out to do this?

Ed Laczynski, CEO of Zype © Robert Wright/LDV Vision Summit


Ed: One thing that resonated with me from Trina's presentation was the idea of this identity crisis in video and I kind of think that-

Rebecca: We actually saw that slide, right?

Ed: Yeah, before the click fast-forward, and I thought about that in relation to what the industry is going through now; it's kind of a way to answer your question. I think that there is sort of an identity crisis as content owners try to figure out whether they are direct-to-consumer businesses, or whether they are simply content owners that are part of an existing supply chain and continue to do business in those traditional ways. For us at Zype, we saw a confluence of market changes: the availability of bandwidth, the availability of devices, consumers having all these devices in their hands and in their homes, and the advent of cloud computing, which is a technology for developers like us. The ability to scale up a business that traditionally would have required a tremendous amount of capital to deliver a service to a customer became available to us in the last five years.

Rebecca: It seems like forever.

Ed: Yeah.

Rebecca: I mean just to put it in context, because we've been looking at so many amazing technological advances in the past day here, and this morning especially, I think just the reality for us living in old, traditional media land is that these things have happened really slowly. I mean Evan can tell you that I've been saying this is the year of digital video since about '97, so this is the year it's really happening. I think with these devices, you really can do almost anything you want to from your living room remote or from your phone, it's changed the game for everybody.

Ed: Yeah, exactly. For us, we wanted to build a service that was really easy for business people to use. For content owners that are business people, or technologists that need to meet performance goals, we did not want them to have to worry about the assembly as much as the outcomes. We focused our product that way and saw that demand in OTT; particularly over the last 18 months it's been the number one inbound driver for us. We do a lot of online marketing and a lot of content marketing, and we have people coming in every day who own content and want to sign up for our service. 90% of them are looking for OTT solutions one way or another, whether it's Apple TV or Roku, or a native mobile experience, delivered with subscription or transactional video. It's what's driving our business.

Rebecca: Which is great.

Ed: Which is great. We love it.

Rebecca: It's growing fast, which we love. There's so much content out there, and obviously everybody here is accessing all these apps constantly in their living room. There was a piece in Vulture called "The Business of Too Much TV" a couple of weeks ago, and the data point they had was that between 2009 and 2015 the number of scripted shows (TV-like content) nearly doubled, from just over 200 to 409 last year.

Netflix alone says it will produce 600 hours of original television, which it will probably exceed, and spend five billion dollars on programming, including acquisitions. There's so much content being created, let alone all the YouTube influencers and rising Facebook Live stars, which we're going to be talking about later today. How much TV is enough? For the OTT apps you guys are empowering and Patricia's creating, when does it become too much? When does the consumer decide they just can't watch it all?

Ed: We know with cord-cutting trends there's $1,500 per year of consumer wallet up for grabs for services, and you're going to spend half of that on your internet access. So you figure about $750 worth of content; that's what's available.

There is sort of a wallet share, and I think we should look at that for a sense of how much a consumer can spend, or how much the supply chain, through advertising, can afford to create. Apparently it hasn't been enough yet.

I don't think it's necessarily too much TV, I think there's just an endless appetite for premium content...There's no excuse to watch bad TV.

-Patricia Hadden, SVP of NBCUniversal

Rebecca: Not enough?

Patricia: Yeah, I don't think it's necessarily too much TV, I think there's just an endless appetite for premium content. I think it's more about how we package and deliver it in a way that's convenient for the user. There's no excuse to watch bad TV these days, right? You can watch only what you want to watch, and so TV is just part of our social fabric; if you go onto any social platform you'll see the conversations are typically around TV, except for now, when they're political, because that's entertaining. Again, I think it goes back to which platforms are able to package and deliver content in a way that's convenient and provides a bit more choice for the user. I think Amazon's doing this incredibly well, because not only can you stream through Amazon, but they have these kind of add-on subscriptions or niche genre services.

Now you can stream your videos, but then they have these add-on subscriptions, which are very specific to genres or to your individual tastes. While the audience is way more fragmented, they're also incredibly specific in what they want to watch and so you can see these kind of small, niche services or add-on subscriptions start to emerge that really speak to your individual tastes.

Steve: Yeah, I would just add that what we're seeing in the market is a proliferation of tons of content. I always get the question from my wife: "Why do we have 863 channels when you watch eight?" She's right, but a lot of guys who want to watch sports don't want to give up ESPN, and that's the big gorilla. That's our customer; ESPN thinks, "we make a ton of money from the cable companies," and when they do the math, it just doesn't add up.

One thing that we try to help our customers with is using analytics to cut through to what their consumers actually want to watch. When I get my subscription to the comedy channel, I can quickly use analytics to get to the content I want to see. I don't want to sift through reams of libraries of content. Our customers want to learn about that consumer and what they like, and use algorithms in real time to figure out what Ed likes and what I like. Content is king, and with that big of a library in the world of premium content, you need to be very smart about how you stand out and keep that consumer. For us, analytics are a big play in that.

Rebecca: Yeah, getting the eyeballs for the content is the key and for Patricia, you're flanked by awesome technology up here, when you set out on the Seeso mission - was a bigger challenge the content piece or the technology piece or the revenue profitability piece? When you guys looked out and like there's this huge opportunity, but what's the biggest challenge? What did you feel like it was?

Patricia: That's a great question. We're a content creator, so that comes very naturally to us. The way that Seeso approaches programming is very, very different from NBC proper. We invest in a comedian or a piece of talent, and oftentimes we'll go straight to series, as opposed to the typical linear model, which is a pilot and then greenlighting and that whole thing. We have much more flexibility in terms of our programming, but technology is always the backbone of the product, especially when you're doing an OTT or SVOD service. We are using a great company called thePlatform, which Comcast partially owns.

Rebecca: Breaking the hearts of the guys on either side of you, by the way.

Patricia: I know, we've already talked about it, but there's other ways that we're going to work together.

The benefit of a product like Seeso, or any SVOD product, is exactly what Steve was saying, which is the real time analytics - the amount of data that we have and the ability to target based on your viewing behavior and based on your sessions. It just gets so granular and that is the key on how we look at what programming is working, what is the programming that is driving the most acquisition versus retention and we can really kind of slice it that way. It's incredibly important.

Rebecca: Steve, in your world of Ooyala, in that universe of hundreds of publishers are you feeling like technology hurdles are becoming easier for them to handle? We have all the content, we can do technology or are you still trying to win them over? I mean you're always trying to win them over, but from an understanding perspective.

Steve Davis, VP & GM East of Ooyala © Robert Wright, LDV Vision Summit


Steve: Yeah, you're always trying to separate those out, right? The technology hurdle, again, has changed dramatically in the last 18 months. The bigger hurdle is getting 10 people in a room to agree on what the OTT strategy means and is, and we always try to help by saying the business case needs to be there for that technology. We can do SVOD, TVOD, AVOD, as you said, and these are all the things that get thrown out, but if you don't have a subscriber database already and you're a freemium content provider, switching completely to SVOD might not make sense right away.

Rebecca: Well, I think that, like we remind our clients all the time, if you're going to make an app, whether it's an app for Apple or an app for Roku or whatever, there are screens and screens and screens of apps. It's not just that you've chosen your target demographic of female Millennials living in coastal cities, per se, and you think that you're just going to find them if you create an awesome app and you leverage this technology or some others, because there are so many out there. What are you doing to actually market to them and how are you getting them, whether it's social or other tech? I mean Zype, I think, answers some of these questions, right?

Ed: Yeah, and we try to educate our customers, who have only worked within a traditional distribution supply chain and have relied on others to market. They have relied on others for discovery and promotion, and part of this ecosystem is teaching these content owners how to market, how to promote, how to be discovered, how to do CRM. The fact that they can know who their viewers are is a big deal.

It sounds simple, but it's a really big deal for them. We have tools that, when someone cancels, or when we think they're going to cancel because they haven't logged in in a while, give them alerts; then they can plug into MailChimp or some other email service and do something about it. They can use the ecosystem of software that's available out there.

Rebecca: Actionable data.

Ed: Actionable data.

Rebecca Paoletti, CEO of Cakeworks, Ed Laczynski, CEO of Zype,  Patricia Hadden, SVP NBCUniversal, Steve Davis, VP & GM East of Ooyala (L to R) © Robert Wright/LDV Vision Summit


Rebecca: Love actionable data.  

Ed: And do something with it. The engagement data is really important, but the subscription metrics and all that stuff is what they really care about; that's revenue. We often talk to our customers about having that business strategy up front, having it decided. If you've never done this before, there are budgets you need to put together. While we, I think, are offering a service at a very disruptive price point, there are still going to be costs in marketing, promotion, and discovery. This isn't just hit a button and all of a sudden you'll have a million subscribers tomorrow. It's really hard to build a subscription business of any sort.

Rebecca: Okay. Wait, we're all building within organizations that have to stay profitable, so nobody has the opportunity to just invest in OTT, like it has to make money out of the gate. My last question is, are we living in app world? Thank you, Steve Jobs, for putting us there in the first place.

Ed: I think we are. It's not only those of us of a certain age demographic on this panel; we're minting more consumers every day who live in that world. As human beings in the modern era we're all trained to use apps, purchase through apps, subscribe to things, buy things, and I think the genie's out of the bottle on that, so absolutely.

Rebecca: Yup. A universal agreement. Okay, so audience, are there any questions for the panel? I have dozens more.

Audience Question 1: Good morning and thank you all. Interesting panel. Adaora Udoji, Rothenberg Ventures. A question for each of you: what are incumbents or legacy media companies doing well with their digital strategy, figuring out how to integrate not only the technology but also the strategy, as you mentioned, Steve?

Patricia: I've been incredibly impressed. I've only been at NBC for a year, and the fact that NBC and Comcast are investing in companies like Vox and BuzzFeed really speaks to being very open to being part of the digital conversation, which typically you don't see networks participating in.

Steve: It's probably not a popular thing to say, but I don't think any of them are doing it wonderfully. The closest, though they've also had tons of issues, is HBO, with HBO Go and HBO Now. They're on the right track, but even as a consumer, I moved to HBO for one reason and it was Game of Thrones, right? You want your fix, you want GoT, even if it has had glitches, hiccups, all that stuff. I would say they're a leader. Leslie Moonves of CBS, he's out there every day. In terms of a leadership position, I would say CBS; every statement Les makes is about it...he gets it. He's going direct to consumer and he doesn't care what the cable companies are going to make him do or not do.

Again, taking the pragmatic approach, you're looking at subscribers of their Showtime channel and everything else. If you look at the revenues and at how they're really doing, there isn't a single large, top-10 media company who's absolutely killing it. The guys who are killing it are the digital companies who may not have had the old way of thinking, and there are a number of companies you can talk about, like Toca Boca; if anyone has kids, they make kids' apps. There isn't a kid 6 to 12 years old who hasn't downloaded a Toca Boca app, right? Those are the guys who are killing it, because they're not beholden to old approaches and old ways of thinking. The sky's still the limit for all of the old-time media companies.

Ed: I'd agree. To give some faint praise to a huge media company that's done something with a popular thing fans really care about: the fact that you could buy Star Wars Episode VII through EST on an Amazon Fire device. To me, it was like, okay, that door's opening there, where they let that happen. Disney was a company that sold limited VHS tapes, and you could only get that tape for six months back in the day. I think that it will change, but I would agree that none of them are killing it, except for Patricia.

Rebecca: Oh yeah, next question.

Audience Question 2: I'm Michael Cohen, Facebook. Looking out into the future, I'm wondering what you think the role of Live will be in the entertainment market?

© Robert Wright/LDV Vision Summit


Steve: Ooyala was one of the four companies named at Facebook's F8 keynote, so we were first to market with our live platform feeding directly into Facebook Live. With that said, social is ruling ... Again, 18 months ago people were asking, "what are we going to do, what is OTT, we don't even know?" Now it's not good enough just to have a live stream and an app; it's "how do I get it to Facebook and actually capture my audience in Live?" Look at the messaging and change; that's what happened in 18 months! For me the number one, two and three things that come up are apps, which apps, and how to work with the different social channels.

Rebecca: The only thing I would add there is that live couldn't really be monetized until very recently. This is a thing: we were streaming live, we had the capabilities to stream live, but you couldn't put an ad in; you could barely run an ad on the same page as a live stream. There was no way to make money, so we were all bearing these huge bandwidth and hosting costs for live streaming, but you couldn't actually do anything with it. Now, that's totally changing. You can clip out. You can insert brands. You can do products. All of these things.

© Dean Meyers, VizWorld/LDV Vision Summit


Ed: We believe live is huge for us. This year, almost every new deal we do has some sort of live component. Our most popular subscription business customers, either broadcast natives or talent that came from, say, Sirius Radio or a morning TV show, are now doing their own daily shows around politics or news; it's all around live. They're all monetizing it with subscriptions and pass plans, like buying five days of content. We're also seeing sports as a big driver for live.

Much like ESPN started with niche sports, like darts and bowling and stuff like that, we just live-streamed the U.S. men's polo championship. They had 12 drones in the air, feeding it through a video switcher on a truck onsite, through our platform and then out. They did really well with it. Live is going to be really big, and I think the social platforms are the discovery platforms for live. That's where you're going to engage that audience, and then the trick is, how do you get them to monetize outside that social platform in a meaningful way?

Rebecca: Or pay for it, right? Consumers will pay for a lot, definitely will pay for a lot of access to live, especially when it comes to sports.

Patricia: Once a month we actually do a live-streamed comedy show from the Barrel House, and we just send out a quick email beforehand, and we do see an uptick in engagement and an uptick in subscriptions right around that show. I agree with everyone, live is not going away. I think there is still that draw, and so we're going to continue to do that.

Audience Question 3: Hi guys. My name is Brian Storm, I run a small media company called MediaStorm. I'm curious what you think about really independent, small, niche companies. Can we play in this space? I've got a five-, six-person company, we do 30 films a year, we have 150 countries hitting our site, but we're not on any of these devices yet, because it's just so hard. Are you building a solution for niche, independent media players? I used to work at NBC and things were easier, but now I don't and it's harder. Can you meet our needs?

Ed: We do. Our platform could work for you, so we should talk after the meeting.

Steve: I was just going to answer: for us, it's not what we would build. Our platform is for customers of ours like an ESPN or a Vice, that kind of thing.

© Robert Wright/LDV Vision Summit


Audience Question 3: The big boys who have big money. We're a small, little pain in the ass.

Steve: There's no question that is our market, but what I was going to say is, there are so many levels of solutions; you have to find the one that fits, and if Ed's company is the one, then that's perfect. You don't want to waste your own time talking to companies like ours, because we would lose money working with a company like yours. There are tons of companies on the smaller end that would be a great fit for you, so that's why you want to get through that quickly.

Rebecca: The great thing about now is that everybody can be a creator and everybody can have an app on any device; that's just the reality. Even for the smaller influencers, the ones with a million subscribers rather than 100 million, there's really room to play there. You're being modest, your content is awesome.

Audience Question 4: Hi, I'm Jacob Loewenstein, I run MIT's VR and AR community, formerly of BuzzFeed. As a "Millennial," I'm a little confused about the use case for live. I get watching live sports and wanting to be a part of that experience you can only consume while it's happening, I get wanting to interact with an influencer, because this is someone I think is cool and I want to talk to that person. But beyond that, the use case for live doesn't really fit, at least for me personally, with how I consume content, especially if you're talking about OTT and Netflix and the idea of like curation. I guess I'm just confused about if I'm scrolling through my social stream am I really going to stop to watch something live outside of those two use cases?

Rebecca: Is that because it feels inconvenient to you?

Audience Question 4: Inconvenient, not curated necessarily in the moment to what I actually want, I can't develop my own sort of lineup of things to watch in a sequence. It doesn't fit some of the other trends, at least I feel like, I'm seeing and how content fits for people like me.

Rebecca: Patricia, first, then I know Ooyala and Steve you have a lot of data around this, and, Ed, you're starting to, but Patricia, when you guys set out to do live, you definitely see engagement.

Patricia: Well, the reason we set out to do live is that the service we created is specific to the comedy nerd, right? While those of us who live on the coasts are very lucky to have access to comedy clubs, there's this whole middle part of the country that may not have the access we have, and so what we're seeing is that people want to be part of this comedy club. They actually want to be part of the audience, whether it's virtually or actually sitting in the stands, or in the audience, watching a comedian perform.

There is an energy exchange going on, so that is the intent behind what we're trying to do with our live stand-up comedy, and it seems to be working. It's really interesting to see, again, back to data, the engagement across the country, and exactly to your point, we see people on the coasts watching it on demand. They're going to watch it when they're going to watch it, and everyone else, they're going to watch it live.

Ed: Imagine if you're a gamer and you use Twitch, you're watching live gaming. Taking that same concept, we have customers that are doing live viewing parties, so everyone's chatting and talking about what they're watching because it's meaningful for them. It's a way to break through the mold. Live used to be the only way you could consume content; in traditional broadcasting it was all live. Now it's sort of an exceptional way to maybe drive some additional marketing draw or interest from activated consumers that really care about that content.

Rebecca: That's a great question. Are there any other questions?

Audience Question 5: Hi, my name is Christina, I'm the CEO and founder of Seena Books and we are in consumer behavior measurement. Steve, you were talking about analytics and the importance of this. How are you measuring engagement and what are the current trends that you see? For example, some companies are already integrating facial recognition of emotions to capture another level of engagement. What are you doing in terms of that?

All the analytics we do is within video and we can pull in data from other places, but it's always around video.

-Steve Davis, VP & GM East of Ooyala

Steve: That's a great question. For Ooyala, specifically, it's all within video. All the analytics we do is within video and we can pull in data from other places, but it's always around video. Then within video it's broken out into a million different segments. Is it a live stream? Is it VOD? Is it by app or device? Is it by geography? Then when you get into engagement, that opens up an entire other universe of metrics. Engagement to our publishers, though, typically leads back to revenue; it's all about driving the business. Facial recognition of emotion could be the coolest thing in the world, but if it doesn't drive revenue, typically, our publishers don't care. I'm not saying it's not valid for someone else.

For us, they want to tie back: in this video, did the consumer drop off after we added a mid-roll, and if so, how do we keep that consumer all the way to our post-roll? Did we put too many ads in? Should we have no ads in? That's what they're trying to figure out. So when we talk analytics and video, it's in real time, it's algorithms, and it's driving discovery and content recommendation, so in an ad-based world you get that consumer to click one more video, and those are all the pennies and dimes that add up for our customers. That's the analytics we're looking at. In an SVOD world you're trying to reduce churn: how do you keep those subscribers on your system? All right, so that's what we mean by analytics. The word analytics is funny, but within video it can take off. We have customers who are click-to-buy, they're retailers, so those are the analytics that are important to them.

North Face has a video of a guy climbing, and you put your cursor over the backpack; they want to get that backpack into the shopping cart. It's not ad-based, but it's tied to revenue at the end of the day. A lot of it leads back to how you help your business with video. And it's funny, on the live question from the gentleman from MIT, our data follows exactly what he just said, which is Millennials will sign up for VOD, unless it's a BlizzCon event, then they've got to watch that live.

Patricia: Well, I was going to say, I think it's a great question, and it also really speaks to the fact that there's no standardization of digital analytics, which is what we're all kind of circling around. We do the same exact thing. Our typical KPIs are around engagement and retention and churn and how to mitigate that. But what we're also thinking about, and at Seeso we talk about this a lot, is how do you quantify customer delight and how do you get to a better laughter score? We've created this algorithm internally, where we take these four pillars, curation, time to choice, brand equity and shareability, and measure ourselves against a competitive set to see if we can actually get you to laugh more, to laugh better. Again, all this to say that there's this space available where it's not a Nielsen rating, where we're still trying to figure out what that digital standard is.

Rebecca: Thank you. Thank you. All right, we're done. Now, lunch?

The annual LDV Vision Summit will be occurring on May 24-25, 2017 at the SVA Theatre in New York, NY.

Computer Vision Will Disrupt Many Businesses Especially Manufacturing and Consumer Shopping

Howard Morgan, Founding Partner at First Round Capital & Director at Idealab


[Reposted with updates for our next annual LDV Vision Summit on May 24-25, 2017 in NYC.]

Howard Morgan is a founding partner at First Round Capital and director at Idealab. He began his career as a professor of decision sciences at the Wharton School of the University of Pennsylvania and professor of computer science at the Moore School at the University of Pennsylvania from 1972 through 1985.

We are honored that he was one of ~80 expert speakers at our 2016 Annual LDV Vision Summit. This post was the virtual start of our fireside chat with Howard, and you can watch the extended live version here.

Evan: First Round Capital “FRC” has invested in many visual technology businesses that leverage computer vision. In the next 10 years - which business sector will be the most disrupted by computer vision and why?

Howard: Both the manufacturing inspection area, which has had various types of visual tech over the years, and the consumer sectors, particularly shopping, will be disrupted. You will be able to shop either by asking Alexa for something, or showing it to her - or her equivalents with a camera.

Evan: There are many AdTech companies in FRC's portfolio. Several sessions at our LDV Vision Summit will cover how computer vision is empowering the advertising industry with user generated content, to tracking audience TV attention and increasing ROI for marketing campaigns. What intrigues you about how computer vision can impact this sector?

©Peter G.Weiner Photography


Howard: Our GumGum investment, along with our investment in Curalate, both make heavy use of computer vision to determine and target users in the visual sectors. People will be shown contextually relevant ads or additional information based on the pictures they're looking at, or creating, on the various social media platforms. This will get much more specific and lead to directly shopping from images - something that's been tried but has been hard to do well with mediocre image recognition.

Evan: FRC has a unique approach to celebrating the New Year with your great annual holiday video. It seems like a successful video marketing initiative. How did that idea originate and what was the goal? After years of doing this holiday video - what is your advice for your companies that wish to create an annual video marketing initiative?

First Round Annual Holiday Video 2013 "What Does the Unicorn Say?"


First Round Annual Holiday Video 2015 "This Time It's Different"


Howard: Josh and the team decided we wanted to have fun with our companies, and show the power of the First Round Network as one which not only has high performing companies, but also great fun. And it was our way to feature the companies, and not just our partners in our holiday greetings to the world.  Our advice to those who want to do an annual video marketing piece is to be creative - we have chosen the parody, but there are lots of other ways to be creative.

Evan: You frequently experience startup pitches. What is your one-sentence advice to help entrepreneurs improve their odds for success?

Howard: Get really good at crisply telling your story - why the product is needed, and how you’re going to make money with it.

Evan: What are you most looking forward to at our LDV Vision Summit?

©Heather Sullivan


Howard: LDV Vision Summit has a great view of the future of vision and related technologies. I always want to be learning about the very early technologies that will impact us over the next decade, and LDV is where I hope to find some of that information.

Howard Morgan spoke in a fireside chat with Evan Nisselson at the 2016 Annual Vision Summit. Our next LDV Vision Summit will take place on May 24 & 25 (Early bird tickets at 80% discount available until March 31).

Other expert speakers at our 2017 Summit are from Union Square Ventures, Facebook, Google, IBM Watson, Twitter, Pinterest, Lyft, Twilio, Glasswing Ventures, Cornell Tech, and many more...

Entrupy Secured over $8M in Inventory for Louis Vuitton, Gucci and Others after Being a Finalist at the 2015 LDV Startup Competition

Vidyuth Srinivasan, Co-founder and CEO of Entrupy, Inc ©Robert Wright/LDV Vision Summit


The LDV Vision Summit is coming up on May 24-25, 2017 in New York. Through March 31 we are collecting applications to the Entrepreneurial Computer Vision Challenge and the Startup Competition.

Vidyuth Srinivasan, Co-founder and CEO of Entrupy, Inc was a finalist in the LDV Vision Summit 2015 Startup Competition. Entrupy’s goal is to enable trust in high-value goods transactions by providing on-demand authentication solutions to every business. They now support luxury brands such as Louis Vuitton, Chanel, Gucci, and many more. Vidyuth gave us an update on Entrupy since their stellar performance at the Startup Competition:

How have you and Entrupy advanced since the LDV Vision Summit?
We have since launched a product, acquired over 100 paid business customers, expanded our team to 10 (+4) and signed partnerships with large marketplaces.

What are the 2-3 key steps you have taken to achieve that advancement?
Focus on product, demonstrating value of the product to our customers, and building strategic partnerships to demonstrate value at scale.

What is your proudest accomplishment over the last year?
We helped secure over US$8M worth of inventory for our customers in 2016 by authenticating and assisting in the sale of high-end handbags with our on-demand device. In the process, we helped remove over US$1M worth of fake goods from the market. We also rapidly grew our customer base to over 100 paying business customers.

Vidyuth Srinivasan ©Robert Wright/LDV Vision Summit


What was a key challenge you had to overcome to accomplish that? How did you overcome it?
Scaling up our data collection and launching more product to customers was a challenge. We overcame it through focusing on the right data partners and being patient about our learning before launching more product support. It helped us avoid a ton of pitfalls in launching too early and helped us gauge product experience even when we deliberately added lag into the user experience.

What are you looking to achieve in 2017?
Scale our customer base, make the product experience so compelling that they think it's magic!

Did our LDV Vision Summit help you and Entrupy? If yes, how?
The Vision Summit helped us meet some good folks in the Vision field and also put in perspective the different ways to solve vision problems.

What was the most valuable aspect of competing in the Startup Competition?
We received very specific feedback on which parts of the business to focus on and which secondary details to leave out. This led to a lot of refinement of our ‘story’. Also, we were asked fairly specific questions which helped us realize where we needed to have stronger answers (which meant doing more homework).

We ended up having some abstract conversations on the technology and application with technical experts, which helped reinforce the importance of what we’re doing and where this could go.

What recommendation(s) would you make to teams submitting their companies to the LDV Vision Summit Startup Competition?
Be clear, concise, and lead with the best thing about your startup. Your pitch isn't your entire company, it's your screenplay for a great movie. Leave out things that are 2nd-order details and be choosy about what adds clarity to help people understand/appreciate your company more.

What is your favorite Computer Vision blog/website to stay up-to-date on developments in the sector?

Applications to the 2017 ECVC and the Startup Competition at the LDV Vision Summit are due by March 31, apply now.

The Conscious Home is Achievable in the Next 15 Years: Your home will leverage visual sensors to be smart enough to understand what you want or need

In no order: Jan Kautz, Director of Visual Computing Research at NVIDIA; Gaile Gordon, Senior Director Technology at Enlighted; Chris Rill, CTO & Co-Founder of Canary at the 2016 LDV Vision Summit ©Robert Wright/LDV Vision Summit


Join us at the next LDV Vision Summit. The "Visual Sensor Networks Will Empower Business and Humanity" panel discussion is from our 2016 LDV Vision Summit.

Moderator: Evan Nisselson, LDV Capital with Panelists: Jan Kautz, Director of Visual Computing Research at NVIDIA; Gaile Gordon, Senior Director Technology at Enlighted; Chris Rill, CTO & Co-Founder of Canary

Evan Nisselson: Thank you very much for joining us. You heard and saw my recent article about Internet of Eyes being larger than IoT market opportunity. Before we jump into that, I'd love to hear two sentences from each of you. What do you do at your company and what does the company do? Gaile, why don't you take it away.

Gaile Gordon: Okay. So, I'm at Enlighted. Enlighted introduces dense sensor networks into commercial buildings through lighting. We use the sensor networks to control how much energy is used by the lighting, which is what pays for that venture, but it also produces interesting data sources which are used for HVAC control and space planning. I've been there since January and I work with the CEO on the next generation of sensors and applications that run on that network.

Jan Kautz: I lead the visual computing research group at NVIDIA. That is to say, I do research with my team on computer vision and machine learning. NVIDIA probably doesn't need a lot of introduction. You all probably know graphics cards are what we're known for. Recently, we use them for more and more; we sell them to all types of markets, from self-driving cars to the cloud and so on.

Chris Rill: I'm one of the founders and CTO at a company called Canary here in New York City. We build devices that protect your home and other environments. We've packed an entire security system (an HD camera, microphone, and life safety sensors) into a device the size of this bottle of water. We connect them to AWS and we send you alerts when we detect anomalies in your environment. You can control this system all from your smartphone.

Evan Nisselson: So, one of the things that I love is a very diverse panel from three totally different perspectives and that can be challenging, but there's also going to be a lot of synergies. Gaile, why don't you kick it off and tell us a little bit about what's a smart commercial building?

Gaile Gordon: The primary thing that makes a building smart is that it has sensors. It has a way of reacting to what's going on in the environment. Using, for instance, where the people are in the building to control at a very fine degree how much energy is used by the lighting, and also how much light is coming in from the windows to change the dimming levels, etc. But more than that, it's a source of data that you can use to study what the behavior patterns in the building are, which is a really interesting thing for other applications.

Evan Nisselson: For what?

Gaile Gordon: For space planning, for instance. So, to make sure that you're using the building to its best efficiency. That your conference rooms are being used, that they are sized appropriately. Then, I think going forward, there's going to be a lot of new interesting things that we can do with the network that's already there, paid for through the energy savings. Things like asset tracking, indoor location, lots of really, really interesting things that I think people will find valuable to their daily lives.

Evan Nisselson: What's the main ROI for a building to start using these sensors?

Gaile Gordon: The energy savings introduced are about 75%.

Evan Nisselson: Okay.

Gaile Gordon: So, it's a no-brainer.

Evan Nisselson: Right.

Gaile Gordon: As opposed to previous applications which studied where people were in the building, which had to first pay for putting that network there; sometimes that was obvious in terms of its value and sometimes it was not. But in this case, it's completely obvious. The cost is immediately covered through the energy savings. Which gives a really interesting business model to figure out what else to do with the platform that's there.

Evan Nisselson: Right. Chris, take it away from there. Evolution of the sensors in commercial buildings, you're obviously more focused on the home. Tell us a little bit about some of the sensors, in addition to the camera, that are on the Canary devices.

Chris Rill: Sure. When we started designing Canary about four years ago, we knew that video and audio were so important because of our smartphones and the fact that we have been looking at these really crisp images. We wanted to give a better picture of what was going on within the home. That's really the things you can't see, especially when you're not in that environment. So, the temperature, humidity and air quality were really important for us to really understand the context of that environment. For situations where you're monitoring, say, an elderly parent or a child, understanding their comfort and understanding the well-being of that environment goes far beyond video and audio. That's one of the reasons why we included those additional sensors. So that you were almost telepresent in those environments that you were monitoring.

Evan Nisselson: So as a co-founder, why did you start this business? I'm sure there have been some surprises in the technology challenges. Why'd you start it, in a short sentence or two? And what's the biggest surprise once you started connecting all this data, because it relates to the signals that are either useful or not.

Chris Rill: For me, it comes from personal experience. About six years ago, my apartment was broken into while I was living abroad. I sought a system to monitor my apartment. Like any engineer, I went to the local hardware store to look for something that I could put in my home. There was nothing that you could simply buy and place in the corner of your home. So, I bought these sensors you stick on the windows and doors, and I hacked together a camera and a server in the cloud. That was my security system.

Today, companies like Canary are enabling everyone to do that. Not everyone's an engineer. Not everyone has the resources to go and put that system together. That's one of the reasons why I got connected with my co-founders, Adam and John, and why I'm so passionate about using technology to understand what's going on in environments, because I forever have that baggage because of the trauma in my life.

On the question about surprises: I would say it's been so surprising to see how hard it is to build a consumer product company. Not just a device, but to have it be available and working at the quality it needs to work, all day, every day. Something I'm especially passionate about is security: securing these products at scale. Because I'm actually terrified of the internet of eyes. I know it's not just about the algorithms, but also about the security of the information that these algorithms are analyzing.

Evan Nisselson: Jan, you're kind of perfectly sitting on the panel in-between both of these opportunities. I liked how you orchestrated that. Tell us some use cases that you're working on today that relate to both of these. And maybe the one that you're most excited about working on with your team.

Jan Kautz, Director of Visual Computing Research at NVIDIA ©Robert Wright/LDV Vision Summit

Jan Kautz, Director of Visual Computing Research at NVIDIA ©Robert Wright/LDV Vision Summit

Jan Kautz: We recently started working on something which really relates to both of these cases: using visual sensors to do activity and action recognition in videos. One of the use cases at home might be, you have lots of cameras in your home, and although you might be able to monitor what's going on remotely, say you have an elderly parent that you care for and you're not at home. Your parent falls, you wouldn't know, but your sensors are able to recognize that your parent fell and call an ambulance directly. Those are use cases that wouldn't be possible otherwise. Those are the things that I think will make a big difference in people's lives in the future.

Evan Nisselson: Is that something that's possible right now? Or are we talking five years out or eight years out? Without giving away the secrets. I mean, how soon is this? Because Chris is smiling. He says, "I want to use that."

Chris Rill: When can I have it?

Evan Nisselson: Exactly.

Jan Kautz: It's a question of robustness, right.

Evan Nisselson: How do you define robustness?

Jan Kautz: How many false alerts are you willing to deal with?

Evan Nisselson: So, Chris, how many false alerts are you willing?

Chris Rill: Oh, man. Well, as an industry, security is 99% wrong. So, there's a very low bar that you need to meet-

Evan Nisselson: So, you want it today? That's what you're saying?

Chris Rill, CTO & Co-Founder of Canary ©Robert Wright/LDV Vision Summit


Chris Rill: Yeah, today. All joking aside, we've seen with the application of computer vision that we've been able to reduce false alarms by up to 80%, and it continually gets better as we get better labels and better data to train our models. So I 100% agree about false alarms; the first time you freak out because you think your mother or father fell, you're going to cry wolf, and eventually you're going to shut the system off. That's what traditional security has had to deal with for years.

Jan Kautz: There's still some way to go.

Evan Nisselson: Okay. Give us another use case while you're on the hot seat. Give me another use case that you're really excited about.

Jan Kautz: I think the other one is self-driving cars. I think that's going to be a big use case for sensors. It's not just visual sensors; in that case, there will be additional sensors on cars. But it will be a big change to the way we live as well. In 10 years' time, everybody will have-

Evan Nisselson: Okay, so today most of the cars that are on the road, how many sensors are in the car?

Jan Kautz: There's still a lot of sensors.

Evan Nisselson: Roughly, what do you think?

Jan Kautz: There's radars, ultra-

Evan Nisselson: Are there 10, 20?

Jan Kautz: 10.

Evan Nisselson: Okay. In 10 years or 15 years, how many will be in the car? Or just one controlling more?

Jan Kautz: No. There will be more sensors, but there is disagreement over which ones and how many. I'm betting on cameras and radar. Not LiDAR, because I think they make your car look ugly; they're big and bulky.

Evan Nisselson: Just because of a look factor?

Jan Kautz: No one wants to put a big spinning LiDAR on top of a car.

Evan Nisselson: Right, right. Okay.

Gaile Gordon: It doesn't need to spin.

Gaile Gordon, Senior Director Technology at Enlighted ©Robert Wright/LDV Vision Summit


Jan Kautz: It doesn't need to spin. Right!

Chris Rill: There was a Kickstarter campaign that did a small LiDAR unit for about $200. I'm not sure how it did. Anyone back that-

Evan Nisselson: There's a bunch of them working on it.

Gaile Gordon: I think it's safe to say there will be 3D sensors on the car.

Jan Kautz: Yes. Which form they take is the question.

Evan Nisselson: What are the options? 'Cause we have a technical audience here; it's our technical first day.

Jan Kautz: It could be stereo cameras, that's one way. LiDAR is the other way. We might just need more of them if we don't have a spinning LiDAR, but you could do that.

Evan Nisselson: So, Gaile talk to us a little more about the technical side for the very technical folks in the audience. Because it is our first technical day. What is the capacity of the sensors you have now? Talk to some of the challenges that you're seeing today and in the near future.

Gaile Gordon: So for Enlighted, the sensors that we have now are relatively simple sensors. They're based on thermal data, ambient light, temperature, things like that. They're already quite powerful. But here are the challenges that we see: lighting has to be extremely interactive, and all of the computation that you're going to be doing is local. So the big challenge is doing the processing locally, so that when you walk into a room, it reacts quickly, and when your network is down, if you happen to have a network interruption, your lights are still working. Processing on the edge is probably the biggest issue, and security, I think, is the next one. You don't normally think of hacking lights, but that can have very broad impact. You don't want your smart building system to be hacked, but you also want people to be able to have personal control over their environment, and so the push-pull there is another huge issue that we have. I think getting more advanced sensors into our networks at a cheap enough and pervasive enough level is the next challenge.

Evan Nisselson: You're sitting very close to someone that might be able to influence that. What would you ask for?

Gaile Gordon: Well, it has to be cheap.

Chris Rill: A discount.

Gaile Gordon: I think that's the primary issue, right? At the price point that we're currently at, you'd have to be able to perform at least the tasks that you can do today, better.

Evan Nisselson: So, just for perspective, how many sensors are on the Canary? How many visual and how many total?

Chris Rill: So we have about, well it depends on what you consider visual.

Evan Nisselson: Your definition.

Chris Rill: Camera, so we've got one.

Evan Nisselson: Okay, so anything that can see: thermal, anything that is not actually a camera as we know it.

Chris Rill: Yeah, correct. We have one camera, and beyond that we've got many different sensors. We have about a half dozen sensors that we use. Some of them we use for user value, like our temperature and humidity, but also ambient light sensing so we can turn on night vision and other such features. But the main ones that the user can see are CNR, APAR, the camera, temperature, humidity and air quality.

Evan Nisselson: Talk to us a little bit about how do they talk to each other? Are they triggering actions locally on the device? Are they communicating back with the cloud? How is the smartness connected?

Chris Rill: Sure. Today we leverage the sensors on the device to change the internal state of the Canary device itself. We have algorithms that interpret the visual data to try to understand it at very low compute, because the Canary device, since we're talking tech today, only has about 400 megahertz of computational power. So you know, you can't really do a lot there.

So, we try to understand what's going on in the Canary device visually and then we upload that to the cloud for further contextual analysis to try to understand whether we have people, pets or just background noise like a shadow or a light change. Then, from there, if we do detect that there's something of interest for you, we will send you a notification and let you know what's going on.
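The two-stage flow Chris describes, a cheap on-device check followed by cloud-side contextual classification, can be sketched roughly as below. All function names, thresholds, and the placeholder heuristic are illustrative assumptions, not Canary's actual code or API:

```python
# Hypothetical sketch of the edge/cloud pipeline described in the panel:
# a low-compute check on the device decides whether to upload, then the
# cloud classifies the event as a person, pet, or background noise.
# Every name and threshold here is illustrative, not Canary's real code.

def edge_motion_check(frame_diff: float, threshold: float = 0.1) -> bool:
    """On-device test: did enough of the frame change to bother the cloud?"""
    return frame_diff > threshold

def cloud_classify(event: dict) -> str:
    """Cloud-side contextual analysis: person, pet, or background noise."""
    # Trivial heuristic standing in for a real vision model.
    if event.get("shape") == "human":
        return "person"
    if event.get("shape") == "animal":
        return "pet"
    return "background"  # e.g. a shadow or light change; no notification

# A frame with significant change gets uploaded and classified.
if edge_motion_check(0.4):
    label = cloud_classify({"shape": "human"})
    if label in ("person", "pet"):
        print(f"Notification: {label} detected")
```

The split matters because, as Chris notes, the device has only about 400 MHz of compute, so anything expensive has to happen after upload.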

Some of the other sensors that we use that are really about understanding the environment are temperature and humidity. If the temperature dips or spikes, or the humidity moves outside of 30 to 50%, which is really the comfort zone for the home, we'll send you a notification to say, "Hey, we see that your humidity is low." From an air quality standpoint, that's another whole can of worms. What is air quality? Is it a pollutant? Is it carbon dioxide? I'm sure people in offices really want to understand what air quality is. So that's really a qualitative sensor for us, to understand: is the air changing? Things like cigarettes and cooking actually influence that. But today, it's still really-
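As a minimal sketch of the comfort-zone threshold logic Chris describes: the 30-50% humidity band comes from his remarks, but the function name and alert wording are hypothetical, not Canary's actual notification code:

```python
# Hypothetical comfort-zone check. The 30-50% relative-humidity band is
# from Chris's remarks; names and message format are illustrative only.
from typing import Optional

HUMIDITY_COMFORT_RANGE = (30.0, 50.0)  # percent relative humidity

def humidity_alert(reading: float) -> Optional[str]:
    """Return a notification message if the reading leaves the comfort zone."""
    low, high = HUMIDITY_COMFORT_RANGE
    if reading < low:
        return f"Hey, we see that your humidity is low ({reading:.0f}%)."
    if reading > high:
        return f"Hey, we see that your humidity is high ({reading:.0f}%)."
    return None  # within the comfort zone, no alert

print(humidity_alert(22))  # low-humidity alert
print(humidity_alert(40))  # None, inside the comfort zone
```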

Evan Nisselson: Cooking in the home, you mean?

Chris Rill: Oh yeah.

Evan Nisselson: When a steak smells really good it affects Canary?

Chris Rill: Yeah, I'll get an alert sometimes that says your air quality is very bad. Which, for me, is actually a good thing.

Evan Nisselson: So that brings up the perfect segue: will it tell you when that steak is ready? Medium-rare.

Chris Rill: We're working on it! R&D. One day! There are connected frying pans, though.

Evan Nisselson: So that's kind of where, you know, the interesting thing is the connection of different signals, not only from Canary, but from the phone and other things that may interact with Enlighted or other companies. The APIs that we are used to online are then going to become more the APIs of the internet of things and the internet of eyes. Give an example of what would excite you.

That's the opportunity, really, the "synergy," for those playing Bingo today. The synergy between all of these different signals. Today I would say we don't do a really good job at integrating all of the different signals in your home and all of the different signals that are publicly available to add the right context.

-Chris Rill

Chris Rill: That's the opportunity, really, the "synergy," for those playing Bingo today. The synergy between all of these different signals. Today I would say we don't do a really good job at integrating all of the different signals in your home and all of the different signals that are publicly available to add the right context.

One thing that really excites me, and I don't know if I've talked about this publicly, though I have talked to some of you about it, is using the accelerometer on Canary. I believe that we will have the largest seismic activity detection system in the world because of the number of units we have deployed, but we haven't yet started looking at that accelerometer data and correlating it with the seismic readings that we get from the different government agencies around the world.

I would say that from an interactive perspective with all of these other companies, we have an integration with Wink. They're an OS for the home with which you can control your lights and your thermostat and add a few different sensors. We have an integration with them, and surprisingly a lot of people have started to integrate Canary with Wink. That's something that we're not super focused on, because we feel like the opportunity for us is to empower people with data: meaningful insights from the environment around them. The control of things is really just a small tangential niche that we're eventually going to get to, but there are companies like Apple and Google and Samsung all fighting over your light switches and your refrigerators and all those other things in your home that you may or may not have. The information in your home, everyone has that, and everyone should have access to it so that they can make decisions about the changes that go on.

Evan Nisselson: Gaile, similar to Chris, give us a perspective on the photograph in my presentation that came from Enlighted, the heat map of the office. How many sensors are in the network you're part of, and is that going to exponentially increase? Or, the next question you'll shortly hear, is the number going to decrease while each sensor gets more powerful, so you don't need to have zillions? So, give us a perspective.

Gaile Gordon: The Enlighted network is basically spaced every 10 feet. So, there's a sensor package every 10 feet in a grid around the facility, because every fixture essentially has a sensor package in it, so it's very dense. However, there are other things in the office as well. RF is one of the big things that is also intriguing. We're here talking about visual technology, but we also have to understand that the sensor fusion solution will be very interesting. You can do a lot of things that traditionally the computer vision community had tried to do only with vision sensors, but now that everyone is carrying around a very powerful computing device with radios on it, a lot of things like indoor location can also be either done entirely or augmented with RF as well.
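
As a rough illustration of RF-based indoor ranging (not Enlighted's system): the standard log-distance path-loss model converts a received signal strength (RSSI) reading into an approximate distance. The reference power and exponent below are illustrative assumptions that would have to be calibrated per device and per environment.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance (meters) from an RSSI reading using the
    log-distance path-loss model (illustrative sketch only).

    tx_power_dbm: expected RSSI at 1 m (device-specific assumption).
    path_loss_exp: 2.0 in free space; typically 2.5-4.0 indoors.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# At the 1 m reference power the estimate is ~1 m; a reading 20 dB
# weaker maps to ~10 m under a free-space exponent.
near = rssi_to_distance(-59.0)   # ~1.0
far = rssi_to_distance(-79.0)    # ~10.0
```

With three or more such range estimates from fixed anchors, position can be trilaterated; in practice multipath makes single readings noisy, which is why fusing RF with other sensors is attractive.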

Evan Nisselson: Just as an example for those that are not technical in the audience, when you say sensor packages, what does it look like? Is it a little box? Can we see it? Is it very small, how-

Gaile Gordon: Yeah. The Enlighted fixtures you can barely see it. It's probably the size of a dime, is what you see. It's not very noticeable.

Evan Nisselson: I assume they're going to get smaller. Which brings me to Jan with the question of: when will these sensors be painted on the wall, so you actually can't see them? I'm sure Gaile's company would love to network the painted wall with thousands of sensors inside of it so they can actually talk to Canary.

Jan Kautz: Well, as you mentioned, Apple patented the 2 millimeter camera, so you can place lots of cameras if they're the size of 2 millimeters. You can easily make cameras the size of a dime and place them all over your house. It will be possible. There are questions about security that become a real issue at that point. Do you want to stream all this data to the cloud? I would be very hesitant to stream from hundreds of cameras in my house to the cloud.

Evan Nisselson: Okay, let's talk about it for a second. It's the big issue. What's the difference between streaming two cameras or a thousand cameras? Aside from the, on a security level? It's either secure or it's not.

Jan Kautz: The more cameras you have the more they see, right. If you place it just in your living room-

Evan Nisselson: It's less about the vantage point of how many you have. So, if you know where your sensors are or cameras are in the house you can do the things you shouldn't do in a different room. Your issue is, if they're everywhere-

Jan Kautz: If they're everywhere, if I want a smart house, right. If I want to see everywhere in my house so I won't have any blind spots. I don't want my elderly parent to fall in the one blind spot in my house.

Evan Nisselson: Right. So you can't have both. You have to have them everywhere-

Jan Kautz: You process it at home.

Evan Nisselson: Is that going to happen?

Jan Kautz: I think so.

Evan Nisselson: You think so, Gaile?

Gaile Gordon: Yeah, I think it always-

Evan Nisselson: So, what if you process it at home and keep it at home, which in theory is safer, though it really isn't if it's connected to the internet, versus on the cloud, which is the same thing?

Gaile Gordon: Well, I think one of the things we've gotta think about is data ownership. Who owns the data? I think in 10 years you might see the camera as not being in the environment but on you. When you control the cameras, you own the data, and then the question is a little bit easier to answer.

Evan Nisselson: That's right. Chris.

Chris Rill: I was going to add that the companies that are building these products should build in privacy and control, so that if you don't want cameras on you should be able to turn them off and trust that they're off. But there are other sensors that you can put around your home that still respect your privacy. Trying to understand if someone falls, well there are other types of cameras that you may not really care, necessarily, about the information and if it's going to the cloud. But I do agree that when we look at edge computing, that as semiconductors become less expensive and consumer product companies can put more compute at the edge, that allows us to do some very interesting things both respecting privacy and providing the value and context you need from these signals.

Tickets for our annual 2017 LDV Vision Summit are on sale
May 24-25 in NYC

Evan Nisselson: We've got about eight minutes left, so, we're gonna keep on talking here. Who has questions in the audience? Raise your hand and we've got mics that are going to be passed back there, but, there you go. We're ready already. Let's dig in, go.

Audience Question 1: Hey. My name's David Rose from Ditto Labs in Boston. I have a quick question about Canary. Once you have all these perches in people's homes, do you send video to any other cloud-based APIs for analysis? Or do any of your competitors?

Chris Rill: We do not. There are many competitors now. We've been around for four years, so people have kind of gotten word of us on Indiegogo a couple years back, so I don't know about any other companies that are sending their data to other third party APIs. I do know that there are, kind of, even in this room, other companies that do analysis to try to add context to video, which we do in-house.

Audience Question 1: Gotcha. Is that because of latency concerns? Or why wouldn't you send data to other cloud-based APIs for other services?

Chris Rill: Because, as a company, we do everything in-house or we try to do everything in-house. We have a computer vision team and we want to make sure that that's a defensible intellectual property that we have, so that if you're using a third party, you're gonna have to pay for that.

David Rose: A secondary question is, since you do have cameras in the homes, do you see that there's an opportunity to doing other services, like interior design consulting or bring my cleaner around or give me diet coaching. Based on the things you might know about behavior in the home.

Chris Rill: Not in the near term. We're really focused on the security value proposition. What's going on in my home? Are my kids okay? Is my pet okay? Aging in place type monitoring. Make sure my parents are okay. In the future, maybe. But really, there's so much to do in the security field that I don't see us going into that in the near term.

Evan Nisselson: Who else? Okay, okay. You'll have more chances later. Does anybody have questions? One of the things they say at this point is that one of the biggest challenges of any conference is everybody wants to meet other people. I always feel that when nobody asks questions it's impossible to know who you are. But those that ask the smartest, most interesting questions will inspire those other people to go find you. Look at that! It always works, it's amazing! It's just a very simple statement that just connects.

Patrick Gill: Thank you very much. I'm Patrick Gill from Rambus. I'm a computational optical research scientist. We're developing, sorry, a little bit of shameless self-promotion, a new kind of sensor that will not produce photographic quality but monitors wide-angle, sort of, visual space, and may be able to address some of the privacy questions you might have.

Evan Nisselson: So, what's the question?

Patrick Gill: The question is, would it be worthwhile for companies like yours to have some kind of sensor that's of intermediate resolution? Much more resolution than a PIR sensor, for instance, and yet would not be able to read the text of documents you're working on or identify people by face. Is that a pressing concern that you folks are seeing from the industry, providing privacy against facial recognition and snooping in case your devices are hacked?

Evan Nisselson: Gaile, Chris?

Chris Rill: Yes, we should talk. You know, there are places that cameras just aren't meant to go right now. Like bathrooms and other places where there's an expectation of privacy. Different types of sensors, perhaps, that fit the obligation you have to create private environments in your home, for instance. That's something that I'm really excited about, because there are places our current Canary cannot go. We want to make sure that we're able to monitor all environments, but do so responsibly.

Gaile Gordon: I would agree and I think that beyond, and I don't know about your product, but thermal data, 3D data, there's a lot of types of data that can be used to understand occupancy and what's going on in a room that are not really invasive of your privacy.

Evan Nisselson: So, Jan, tell us, you mentioned in emails with me one of the aspects that you guys are working on is gesture and that's a big project.

Jan Kautz: Right.

©Robert Wright/LDV Vision Summit


Evan Nisselson: Give us some use cases of that. What you're working on that you can share and how does it relate to this topic?

Jan Kautz: Right. Gesture recognition is something we've been working on for awhile now. The first use case was in car user interfaces. If you buy a BMW 7 series it already has a gesture recognition interface.

Evan Nisselson: What does it do, what does it-

Jan Kautz: It's fairly limited. You can take a phone call. I think stop playing music. Things like that. There's only five or six different-

Evan Nisselson: So what do you do to take the phone call? You point to it?

Jan Kautz: I think you make a specific gesture. I forget what it was.

Evan Nisselson: Okay.

Jan Kautz: You point to it. There's sort of a cube in space where you gesture and it recognizes whatever gesture you did, but there's only five or six different-

Evan Nisselson: So those people I see doing that are not crazy? They're talking to their car?

Jan Kautz: They are gesturing to their car, yeah.

Evan Nisselson: Okay.

Jan Kautz: We have a system that allows more gestures, and you can add new gestures on the fly, and it's more reliable than the BMW one. The challenge here is that for user interfaces you really don't want latency. When you press a button you want immediate feedback, you don't want to wait half a second. The same for gestures. If you make a gesture and it takes half a second for anything to happen, you never know: did it work? Did it not work? So you want immediate feedback. That's actually quite tricky for this type of interface, because you don't know if the gesture is ending, has it ended, is it still ongoing? So you need smart algorithms to be able to very quickly rule out any latency. That's something we have. It's actually related to action and activity recognition in videos, similar problems, though gesture is slightly easier because it's more specific. There are only 20 gestures, so it's easier to know what they will be.
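
To make the latency point concrete, here is a hypothetical sketch (not NVIDIA's actual system) of early-commit streaming recognition: instead of waiting for a gesture to end, the recognizer fires as soon as one label stays confident for a few consecutive frames. The `classify` callable is a stand-in for any per-frame model, and the threshold and hold count are illustrative.

```python
from collections import deque

def stream_gesture(frames, classify, threshold=0.9, hold=3):
    """Emit a gesture label before the gesture finishes (illustrative).

    frames: iterable of per-frame inputs.
    classify: callable(frame) -> (label, confidence); stand-in for any
              per-frame model, e.g. a recurrent network's running output.
    Commits once the same label stays above `threshold` for `hold`
    consecutive frames, rather than waiting for the gesture to end.
    """
    recent = deque(maxlen=hold)
    for i, frame in enumerate(frames):
        label, conf = classify(frame)
        recent.append(label if conf >= threshold else None)
        if len(recent) == hold and len(set(recent)) == 1 and recent[0] is not None:
            return recent[0], i  # label and the frame index where we committed
    return None, None  # never confident enough

# Usage: confidence ramps up mid-gesture, so we commit at frame 6,
# well before the 10-frame gesture ends.
fake = [("swipe", c) for c in (0.2, 0.4, 0.6, 0.8, 0.95, 0.96, 0.97, 0.97, 0.98, 0.99)]
result = stream_gesture(range(10), lambda i: fake[i])
```

The trade-off is exactly the one described above: a higher threshold or longer hold reduces false triggers but adds perceived latency.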

Evan Nisselson: Got it. Any more questions from the audience and also questions you guys have for each other? You have three different opinions here that you can ask tough questions and we'll see if we get answers. Go.

Audience Question 2: Hi. You guys have all touched on the security industry in many ways. Do you see any trends that are going to get the security industry to use extra sensory data? Stereo cameras, range finders. Flir, I guess, is doing some infrared stuff, but is there any hope on the horizon for the very traditional security camera, mono camera?

Gaile Gordon: Maybe to just jump in with a quick ... There's security, and then there are a lot of other applications which are very related. Retail, for instance. The retail environment has always been very interested in slightly different data compared to security. For instance, they want to know whether there are adults or children in the space, which is a classic use of 3D sensors. So it's worth mentioning, since you happened to mention 3D.

Audience Question 3: I've been working on a project called the Human Face of Big Data for the past couple years, and one of the things we found in the course of doing this project was that General Electric was working on a series of products aimed at aging at home. I'm curious if the Canary ... One of them is called the Magic Carpet. You install a carpet in the home of your loved one and it actually builds, sort of, a model of what's normal for your own parent and then predicts that your parent may fall just from muscle weakness or a change in their base behavior. These devices that you always see on TV, you know, "I've fallen and I can't get up," have this sort of shame factor where nobody wants to wear them, but now there's this gamification of health. Are you integrating with any of the Jawbone or Apple watches or any of these other devices that are not your own standalone device?

Chris Rill: We're not, but I do think that's an opportunity to help those that are aging in place. Twenty percent of our, sorry, different stat, about a third of our customers are actually 50 and over. They will start to age in place as they get older and it's an opportunity for Canary and other companies like Canary to provide services and technology to kind of monitor those folks that are getting older and may ultimately start living alone and need the assistance of people or technology to be more independent.

Evan Nisselson: Okay, great. We're just about out of time, but two quick questions that if you guys can answer in one sentence answers. What are you most excited about in this internet of eyes and/or visual sensor sector that's going to happen in 15 years or 10 years? Or sometime in the future that says, "Wow, I can't wait for-."

Gaile Gordon: My favorite would be augmented memory.

Evan Nisselson: Which is in one sentence?

Gaile Gordon: Which is, where did I meet this person last?

Evan Nisselson: Okay, perfect. Jan.

Jan Kautz: For me it's a confluence of three areas, which is computer vision, machine learning and robotics. In 10 years or in 15 years we'll see a lot of new, very interesting robots that have capabilities that we've never dreamt of.

Evan Nisselson: Like what? In one sentence.

Jan Kautz: People helping in your home. Like a butler.

Evan Nisselson: A butler. It's going to happen. Okay.

The conscious home is definitely achievable in the next 15 years...your home will be smart enough to really understand what you want or need

-Chris Rill

Chris Rill: I think the conscious home is definitely achievable in the next 15 years and I think cameras will help allow the computers to get the context that they need of what's going on. But it will be the combination of all these different signals that are coming in that will provide all the context. Hopefully your home will be smart enough to really understand what you want or need. Because today that's just not the case.

Evan Nisselson: Okay. So, the last question before we go network, and there are a lot of topics I'm sure people will hunt you down to talk about in more detail. Each of you: there are a lot of smart people in the audience that want to start businesses. What would you suggest they focus on that would be great for the industry, separate from what you guys are focusing on? What's another thing leveraging visual sensors that you can't wait for someone to start working on? Go, Chris.

Chris Rill: Well, I would say the advice to anyone looking to go into entrepreneurship is be very self aware of what you're not good at. You're not going to be able to do it alone. You're going to have to find partners that are exceptional at the things that you're not good at.

Evan Nisselson: Great. Jan.

Jan Kautz: I've thought about this for awhile. I couldn't come up with a good answer. I think I'm a researcher at heart, so for me, it's hardest to tell people what businesses they should start.

Evan Nisselson: Or even from your space. What should someone work on as research?

Jan Kautz: I think pick hard and interesting problems.

Evan Nisselson: What's the difference from hard?

Jan Kautz: Something people haven't solved for a long time. Computer vision was one of those problems, which now finally it's starting to work because of machine learning. I think pick hard and difficult problems.

Evan Nisselson: Gaile.

Gaile Gordon: I think taking the systems, full systems approach is the answer to success. You need to have something that works top to bottom and was made to work together.

Evan Nisselson: Fantastic. Round of applause for this panel. Thank you very much.

The annual LDV Vision Summit will be occurring on May 24-25, 2017 at the SVA Theatre in New York, NY.

The Smalls raised capital and have grown 300% year over year since competing at the LDV Vision Summit

Kate Tancred, CEO & Co-Founder of The Smalls at the 2015 LDV Vision Summit ©Robert Wright/LDV Vision Summit


The LDV Vision Summit is coming up on May 24-25, 2017 in New York. Through March 31 we are collecting applications to the Entrepreneurial Computer Vision Challenge and the Startup Competition.

Kate Tancred, CEO & Co-Founder of The Smalls, was a finalist in the 2015 Startup Competition at the LDV Vision Summit. The Smalls is a content marketplace that connects the world’s filmmaking talent with the world’s brands and agencies. Since the Vision Summit, Kate has been building The Smalls global operations. We caught up with Kate in February to find out more:

How have you and The Smalls advanced since the LDV Vision Summit?
Following the Vision Summit, The Smalls raised investment from Russell Glenister, an angel investor who was in the audience. The funding was used to make key hires and improve technology. Since then, The Smalls has continued to grow 300% per year and now has offices in both London and Singapore.

What are the 2-3 key steps you have taken to achieve that advancement?

  1. Surrounded myself with smart people both on my board and in my own time who have helped in all areas of the business.
  2. Focused on technology.
  3. Looked to international markets that are growing quickly in our space.

Kate Tancred at 2015 LDV Vision Summit ©Robert Wright/LDV Vision Summit

What project(s)/work is your focus right now?
I am focusing on adding key personnel to the business to further cement our position in the UK and assist in the growth in APAC. We are also focusing on new technologies for the business.

What is your proudest accomplishment over the last year?
Opening an office in APAC and our 2016 results.

What was a key challenge you had to overcome to accomplish that? How did you overcome it?
One challenge we encountered recently was adapting the makeup of the team to suit our growing roster of clients and their needs. We decided to restructure and bring new skills into the business, which really helped to ensure we were providing the best service possible. We have also found communication has become a new challenge as we start to operate in other markets. Gone are the days of us all sitting around a desk together discussing our thoughts and plans.

What are you looking to achieve in 2017?
Continued growth and expansion of our team and community.

Did our LDV Vision Summit help you and The Smalls? If yes, how?
Yes, it introduced us to our now director and angel investor Russell. It was also a great networking opportunity for the business and myself.

What was the most valuable aspect of competing in the Startup Competition?
The exposure and the feedback received from the amazing judging panel.

2016 Judges of the Startup Competition (in no particular order): Josh Elman - Greylock, Brian Cohen - Chairman of NY Angels, Jessi Hempel - Senior Writer at Wired, David Galvin - IBM Ventures Watson Ecosystem, Christina Bechhold - Investor at Samsung, Evan Nisselson - Partner at LDV Capital, Jason Rosenthal - CEO of Lytro, Barin Nahvi Rovzar - Exec. Director of R&D & Strategy at Hearst, Steve Schlafman - Principal at RRE Ventures, Alex Iskold - Managing Director of Techstars, Taylor Davidson - Unstructured Ventures, Justin Mitchell - Founding Partner of A# Capital, Richard Tapalaga - Investment Manager at Qualcomm Ventures ©Robert Wright/LDV Vision Summit


What recommendation(s) would you make to teams submitting their projects to the LDV Vision Summit competitions?
Hone your pitch, make it visual and entertaining.

Applications to the 2017 ECVC and the Startup Competition at the LDV Vision Summit are due by March 31, apply now.

Bijan Sabet Invests in Founders Building Inspiring Products That He Would Want to Work For

Bijan Sabet, General Partner & Co-Founder, Spark Capital with Evan Nisselson, General Partner, LDV Capital ©Robert Wright/LDV Vision Summit


Join us at the next annual LDV Vision Summit.

This Fireside Chat, "Future investment trends and early stage opportunities in businesses leveraging visual technologies" is from our 2016 LDV Vision Summit. Featuring Bijan Sabet, General Partner & Co-Founder, Spark Capital and Evan Nisselson, General Partner, LDV Capital.

Evan: Next up is our next Fireside Investor Chat. I'm honored to bring up Bijan from Spark Capital. We're honored to have Bijan here for multiple reasons. Serial entrepreneur, successful investor, and passionate photographer. We started a pre-interview session a little bit ago, and tell us, the audience, which is a mixture of entrepreneurs, and researchers in computer vision, technology execs, why do you shoot with Hasselblad?

Bijan: It's a funny contrast given what I do for a living.

Evan: Exactly.

Bijan: Hasselblad, they make still cameras, and digital cameras, and medium format cameras, but I shoot a 20-year-old Hasselblad that shoots film, it shoots 120 film. For me, I discovered film, or rediscovered film, I guess, about three years ago.

Evan: What was that moment where you're like I'm going the other creative direction?

Bijan: I just started reading books about some of the masters, and was amazed at what I saw, and I just wanted to frankly emulate it. Not that I'm coming anywhere close to it, but I just found it really inspiring, and decided to explore film again.

Evan: Are you processing in the darkroom as well, and doing your prints?

Bijan: I don't do that, no. I did that in undergrad.

Evan: That's what I miss. You did do it in undergrad?

Bijan: Yeah, I did do it in undergrad.

Evan: The brown fingers from the fixer, and smelling afterwards.

Bijan: All the chemicals, yeah. No, I found a great lab in Southern California, and I send everything to them. It feels like the right complement to spending time with digital products all day to have kind of an analog experience. Everything is slow, my Hasselblad takes 12 photos at a time.

Evan: Exactly. I used to shoot with a Rolleiflex, so very similar. Started with a Nikon F, Nikon FM, Rolleiflex, and unlike you I've gone the other direction rather than backwards. In 2003, I got rid of all of them and started with my camera phone, which you'll see in a photo if you're here tomorrow, the Sony P800, in 2003, and I've never gone back. Those twelve pictures, it's interesting, you mentioned you do one or two investments a year...

Skateboard Park, Venice, California. Leica M3, Kodak Portra 400 © Bijan


Bijan: Yeah, so I make more photos than new investments, I guess.

Evan: Actually, probably per fund how many investments would you make?

Bijan: We make about 30 investments per fund.

Evan: No, but you personally?

Bijan: Oh, personally about five or six.

Evan: So there's almost a connection between the number of images in a roll, and...

Bijan: Yeah, I hadn't thought about that.

Evan: That's my role as a moderator.

Bijan: Thank you.

Evan: Do you say take pictures or make pictures?

Bijan: I say make pictures.

Evan: Yeah. Well done.

Bijan: For me, it feels quite good. I don't make fire with two rocks and all that. I still am really excited about what's happening with digital, and connected products, and computer vision, and all sorts of social experiences, but I think if you haven't picked up a film camera in awhile you should do it.

Evan: I agree. We both were operators and went to the investing side. You've been investing a lot longer at a big fund with some great successes. What was the hardest part of that transition? How did you get past any difficulty that might have existed?

Bijan: The hard part on a personal level was I didn't know what I was getting into. Literally 11 years ago I was not an investor, so I didn't know: would I be any good at it? Would I like it? What would it be like actually doing this every single day? There were a lot of questions, it was mostly the unknown, and I would say I'm still figuring it out. What the next ten years will look like, compared to the last ten years, will probably be completely different. I think that's the fun part, but it's been humbling for sure.


Evan: One of my challenges was that as an entrepreneur I always felt that every single day I could tell whether I was moving the ball forward or backwards. As an investor, it's more being a coach rather than doing, and getting too into the weeds is the wrong thing to do. That for me, after 18 years as an entrepreneur, was hard. Have you ever experienced that kind of thing?

Bijan: Yeah, for sure. Especially this pace I'm on one or two new investments a year, I'm meeting a lot of companies but only getting involved in a select few. When I first started it was kind of like am I being productive, and I had some mentors that really helped me think about it differently. I'm learning everyday, I'm helping or at least trying to help people, and I guess I am moving the ball forward that way. But in the beginning I felt like, was this really what I was supposed to be doing?

Evan: Sometimes every week I felt like oh, no, don't do that, back up again, no, no, and kept on going back. From the entrepreneur world we have success and failures, and also on the investors' side. What's the biggest mistake you made as an entrepreneur that you learned the most from?

Bijan: I think the late '90s were quite instructive. I was at a company called WebTV in the mid-'90s and that was a great success on many levels: the people, product, outcome was fantastic. The next company was a company called Moxy Digital, that didn't work. We started the company in 1999, we raised $60 million in our series A at 200 pre, with no product, and we spent it, and learned a lot of lessons.

Evan: What was the one big learning lesson? Is there one that actually helps you now on the other side of the table?

Bijan: For sure. I think some of this stuff is fairly well discussed today around MVP and being capital efficient. In those days - it feels like 100 years ago but it's still very painful to some extent - really thinking about being much more mindful and focused and capital efficient are things that I'll never forget from that experience. We were thinking about things completely differently in terms of trying to get big fast, we thought we knew it all and we didn't.

Evan: A couple months ago you wrote a blog post which I loved, Less Things, Better, and it's about setting priorities and focus in business and personal life. It's a challenge I try to figure out all the time. You wrote a list, and then towards the end I think I recall you said you reviewed the list and you said “I think maybe I've got too big of a list, but it’s pretty core stuff.”

©Robert Wright/LDV Vision Summit


Bijan: I think that is the challenge in startups or in personal life - you sit in a board meeting, and you see all these things to do this year. It's like, do we really have to do all these things? Oftentimes, I'm finding as an investor, I'm in a board meeting thinking this stuff is so obvious, and then when I think about our own firm or my own life I realize that this stuff is really hard. This list I made for myself, you're referring to this blog post I wrote, looking back now the list is ridiculous, there are way too many things.

Evan: I read it the first time, I said oh, that makes sense. Then I got to your comment ‘maybe it's a little long,’ I reread it, and it's like yeah, well, it's probably impossible to do all that, but it's the right goals for a year. What were a couple of things, do you remember a couple of those things really quickly just so the audience knows?

Bijan: Yeah, I'm trying to be a better father, better husband, better partner. I think I'm going to lose a couple hundred pounds. I was going to run around the world. It's almost in every dimension I was trying to improve. I think this work life balance we're all trying to juggle, this is still a work in progress with me, but I think it's a bit of a myth. I think you have to kind of pick which ones you really want to excel at.

Evan: You have to sacrifice some of the others.

Bijan: Yeah, something's going to give.

Evan: Right. Looking at this list and the priority question about entrepreneurs, and the ones that succeed are probably focusing on different priorities. When you see a company in your portfolio, how do you address that if they're doing too many things, or how do you coach them in a way so they might prioritize different things?

©Dean Meyers/Vizworld


Bijan: I think the most compelling time is when the company is struggling, you really have to pick and choose. We see some founders do this almost instinctively, and for others it's more difficult. It's like, if we don't do this, then this is going to be a failure, or why should we do it. One example I went through recently: I've been on the board of a company called RunKeeper for about four or five years. The company was just sold recently, and it was a great outcome for the founders and team and all that, but that company was in a crisis period. It had maybe six months of cash, had a bunch of products in the market, and was going through a tough time. Nobody wanted to invest in the company, no new investor.

The CEO on his own, he didn't do it in the middle of the night, he came to the board and said this is what I want to do. He cut the burn by a third, he dropped one of the products, just shut it down completely, focused on the core business, had a real chance of getting to profitability in a short period of time, and then he had this great outcome. He sold the company for just under $100 million, he owned the biggest stake in the company, the employees did great, and it wouldn't have happened if he'd kept going down this path of we got to have all these initiatives.

Evan: I think that's great, the outcome is great. Could you break down that transitional period? In order to help the audience, the entrepreneurs: how do you figure out “what should I try to do,” “how should I try to filter,” or analyze or rate the things? Do you have any processes that you go through with some of your companies or entrepreneurs personally, one on one, like how do we figure this out together?

Bijan: I don't have a secret answer here, but I do think it's about being as focused as you can on what's the most important thing we have to get done. Oftentimes when we get involved in a company, it's in the earlier stages of the business, and almost by definition that means the capital we're investing is going to be insufficient for the next phase of the company, or for the company to reach its fullest potential. I think at that moment, since we're all mindful that we're basically in deficit financing mode, we have to think about what is the most important thing we have to accomplish for the company to get to the next stage of its mission. I think distilling it down to that, trying to make it a simpler task, is key.

If you could stay capital efficient, and you could find great partners and investors, then you can be singularly focused on what's the most important thing at the time. 

-Bijan Sabet

Evan: Rallying around the mission, a very direct statement, where everything else that doesn't directly tie to it is not dealt with.

Bijan: Yeah. Oftentimes you hear about companies like “hey, there's no business model” initially, and I feel that can be confusing as well because if you start firing up the business model too early it's just another thing you got to worry about, and then you're diluting efforts and everything else. If you could stay capital efficient, and you could find great partners and investors, then you can be singularly focused on what's the most important thing at the time.

Evan: At that point you're talking about capital efficiency. Many in the audience know you and Spark, but for those that don't, could you give a couple sentences about fund size, ideal profile investment, and how early.

Bijan: There's no too early for us. We have offices now in Boston, New York, and San Francisco, and we get involved either at the seed stage or the Series A stage, but I would also say 20% of what we do now is later stage investing.

Evan: The Seed Stage is roughly how much?

Bijan: Seed Stage is half a million dollars and up.

Evan: And the A?

Bijan: It varies, but it's maybe $4 to $6 million, something like that.

Evan: There are a lot of researchers in the audience for whom this is black magic; they have no idea. We're balancing a little bit of the “those that know and those that don't.” Also, you've done several deals that relate to visual content, whether they leverage it, it's a byproduct, or it's core to the business: Twitter, OMGPop, and another one, Lily Robotics. Tell us what got you excited about that, and what the behind-the-scenes questions were: is it going to work, is it not going to work.

Bijan: With Lily, in particular?

Evan: Yeah.

Bijan: With Lily Robotics, if people don't know, it's a flying camera essentially. Some people look at it as a drone, or something like that. For me, the real excitement was that it's a flying camera.

Evan: It's like the polar opposite to the Hasselblad.

Bijan: I guess you're right.

Evan: I love both. I'm just trying to understand.

©Robert Wright/LDV Vision Summit

Bijan: The connective tissue here is that you're making art, or you're making creative media, and our frustration with the other products in the market was that they were really tailored towards people interested in flying and piloting. If you look at products like DJI's, it's this big honking controller, and there's a serious learning curve because it's a piloting system. We really felt the untapped potential here is what happens when you take a camera and make it a flying camera, and you're not piloting, you're just creating work. I think that's the opportunity, and that's why I got excited about it.

Evan: The double-edged sword there is that for the business to succeed, you cannot wait for all of us to have our hovering cameras waiting for us outside like a limo. The negative obviously is that there are 500 cameras floating in the air waiting for us.

Bijan: Right. I think it's like anything. You go to a concert, and you'll see fans just have their camera phones out, and you can ask “are they enjoying the music or are they just too busy Snapchat’ing the content?” I think it's both, and I think this stuff will find its happy equilibrium.

Evan: What's the one activity that you cannot wait to do, that you cannot wait to have photographed by your Lily camera?

Bijan: For me personally it's going to be hiking, but we're hearing all sorts of different use cases from people: soccer, swimming, windsurfing, family birthday parties. Somebody recently talked to us about a wedding. I think the use cases are pretty diverse, but it's been very interesting. There was this one recent moment: we went to a park in San Francisco after a board meeting, and it's right near a school playground, so there's a fence between the school playground and the public park. We were out there flying it, and all of the school kids see the Lily guys testing it all the time, so they climbed on the fence and they were like, it's the Lily guys. It was really exciting.

Evan: That's cool. That's great. That kind of ties in across the whole spectrum of this summit. You guys invested in Cruise, and had a great exit with that, and Oculus, and also content sites, Tumblr and Twitter, so both spectrums relate to many of our sessions here. One became the title of our pre-interview, and I loved your statement: that photographs can tell a powerful story in a way that is unique among creative formats. One of the things we're going to talk about is 2D, 3D, 360. Why, to you, is a photograph better than a gif, or better than a video? What thinking led you to that conclusion?

Bijan: I just think if you look at history or contemporary times it can tell the story in the most compelling way. In some ways, whether it's that US airplane that landed in the Hudson River, with that Tweet, that photo that went around the world...

Evan: When they were standing on the wings, right, all the people?

Bijan: Right. That person took that photo on a camera phone that by today's standards is fairly low res, but that one picture told that story better than any other news report or anything. You see it today with the Syrian refugee crisis, with that child. These photographs are iconic, and I think we've seen over and over again that this is probably the most powerful format ever, and I think it's going to continue to be that way.

Speakers for the annual 2017 LDV Vision Summit include Albert Wenger of Union Square Ventures, Clement Farabet of Twitter, and many more. Check out the evolving list here.

Evan: In relation to that, the great segue is I've still got questions, I've got some evolved ideas the more I've seen recently, 3D content.

Bijan: VR or 3D?

Evan: 3D. Forget about VR. I mean 360 and 3D, without looking at VR for now, do people you think, will people eventually want to see more 3D than 2D?

Bijan: Yeah.

Evan: Let's put 3D roughly like 360, this interactive still, which is not a video and not a gif. In five or ten years, will that be the norm, with 2D being like black-and-white film?

Bijan: There's nothing wrong with black and white film.

Evan: It's a great art, which is all I'm saying. I love black and white, and I still turn some of my colors black and white. It's not a negative thing, but I'm trying to figure out is Instagram, will it be fully immersive pictures, or some other site that's fully immersive, that will be more engaging?

Bijan: It's hard to predict what Facebook's going to do with Instagram. They own Oculus, they own Instagram, so where that's all headed ... I think each of these is going to be its own experience. I think 3D and 360 are a Band-Aid to get to VR.

Evan: Why's that?

Bijan: In my view, it's neither fish nor fowl. It's a 'tweener. What we have today with 2D photographs, printed or on screen, that's a durable format for the ages. Gifs are great, I don't think they're going anywhere, but I think 360 is a bit of a hack until VR is fully mainstream, and I think it's going to be mainstream.

Evan: How long do you think it's going to take?

Bijan: It's going to take awhile.

Evan: I know that's the hard question, but I have to ask it or I wouldn't be doing my job. For everybody in the audience, is it five years, is it ten years, is it twenty years?

Bijan: When we invested in Oculus, I think our view was that it was five years out.

Evan: To the masses?

Bijan: Mass market, yeah, for gaming. That was our projection, for gaming. Post acquisition, obviously we have nothing to do with the business anymore, but I think in some ways it's five years out again. You know, this was five years ago, but the significant difference is that Facebook's ambition and audacity for Oculus is much bigger than our ambition, and I was going to say audacity, but maybe naïveté. They're not just in it for gaming, they're in it for everything. I think that's why another five years isn't kicking the can, it's more the aperture got bigger, not to have too many double entendres.

Evan: I like that. That was good.

©Robert Wright/LDV Vision Summit

Bijan: I think that's the reason why it's just probably going to take longer, but I think that the experience is so compelling that I really believe that it's going to happen. It's just too compelling. Versus 3D movies, it's like I don't feel like that's compelling. It's cute.

Evan: I guess the question there is that experiencing that content whether or not it's gaming or other activities, wearing the gear or holding up some cardboard version or evolution, there's going to be a different activity, and it might be 24/7 one day.

Bijan: I hope not. Yeah.

Evan: It might not be, but up until then I wonder whether or not that 3D and 360 images will be everywhere, and become more normal until headsets are prevalent.

Bijan: It might be, but I don't think you're going to live inside the Oculus 24/7, or the HTC or whatever competing thing. I think it's going to be: when you want that experience you're going to have that experience, and when you don't want that experience you're going to live your life, and that's okay. I don't think this is an “or”, it's an “and.” I think that's where it's exciting. I mean there's no reason why, in the future, I couldn't do this with you, but I could be in Boston, and it will be as lifelike and realistic as me being here.

Evan: I cannot wait. It'll be fun.

Bijan: Maybe I'm not here right now.

Evan: Nobody will be here right now, we'll be in many different countries.

Bijan: It's possible, yeah. We'll have people from all over the world participating in a way that they cannot do that today, and I think that's exciting.

Evan: I think it is if it becomes immersive enough where we feel like we're there. We purposely do not do live streaming of this event because either you're here or you're not, at least for now.

Another thing I read that I'd like to understand a little more is how you personally look for companies, and the types of companies you look for. At least on your profile page it says you like to look for companies with new approaches to building communities through the sharing of ideas and interests. Obviously Twitter and Tumblr relate to that, but what do you see going forward in this world of computer vision and beyond? Are there areas that intrigue you, where you're waiting for something to happen, or is it more serendipity when companies come through along with the trends?

Bijan: I think what you just described and what I've been excited about is more around shared experiences whether it's in these creative tools like Tumblr, or even in workplaces and teams like what you see with Slack and Trello.

Evan: Trello is one of your deals as well, right?

Bijan: Yeah, we're investors in both. I think that, to me, is exciting. There's a lot of energy these days around bots, and things like Alexa and this Google Home thing, and I think they're amazing products from a computer science point of view, or an AI/ML point of view, but they don't really do anything for me. I really feel like they're missing the people side of these things. When I walk into a venue and I see all the Foursquare tips from other people, I find it much more exciting than Alexa telling me to go grab a cappuccino down the street. Her, the movie, wasn't as exciting as a Foursquare tip. I'd rather have people power the internet versus some machine in Mountain View.

Evan: If AI is learning from the masses, it's basically a collective group of people delivering Her, so it could be the next generation Foursquare. When we come back in a couple of years, you'll say, actually, I did invest in the next one, which is delivering.

Bijan: The synthesis of the planet.

Evan: Exactly.

Bijan: Remember in Her she was dating a thousand people at the end of that.

Evan: Exactly. That's great. It's insane. We've got plenty more time, and I want to offer up to the audience any questions, and we're going to interact, and so raise your hands if you've got questions. I will keep on firing away. Anybody have one right now? Go ahead.

Audience Question 1: What are your tips for entrepreneurs who are looking to raise capital, especially in the current funding situation?

Bijan: My own take on the current funding environment is at the early stages what you read in the headlines, just ignore it, literally ignore it. I think it's mostly irrelevant. I think the headlines about this volatility, and it's tougher times ahead, it's really for companies with massive valuations and big burn rates. If you're starting a company today, you don't have to worry about either.

I don't think anything happening in the public markets or macro volatility should discourage anybody from building the next great company.

-Bijan Sabet

Bijan: The issue really is, we've got this funk where there's a tremendous amount of pressure - public markets are putting pressure on tech companies, and as a result it just kind of goes downwards. At the same time, you have entrepreneurs building extraordinary companies based on amazing ideas, and you have venture capitalists that have never had bigger funds ever, ever, ever. There's no shortage of great ideas getting funded. It's not the case where entrepreneurs are out of ideas and VCs don't have any money. It's exactly the opposite. We have both of those pieces at work here. I don't think anything happening in the public markets or macro volatility should discourage anybody from building the next great company.

Audience Question 2: We talked a lot about VR. What's your view on AR? How far do you see that away from being a mass market product?

Bijan: I want to like AR. I really want to like AR. I haven't gotten there yet. I like the concept a lot, but I haven't seen an AR demo that got me there, and I've seen a number of them recently, so I don't feel like we're there. I guess the most interesting ones, although they're not exciting, have been automotive heads-up display systems. I think those are valuable, but by and large, even with Google Glass displaying content on the real world, I just haven't found those product demos super compelling. I'm much more in the VR camp than the AR camp. I think VR is more intentional, and the experiences are much more interesting.

I want to like AR. I just haven't gotten there yet.

-Bijan Sabet

Audience Question 2: Do you think the situation is different for enterprise applications?

Bijan: There might be vertical applications I'm just not thinking about where AR can play a big role, and the experience maybe is more of a functionality than otherwise, but I just haven't seen it. Like for example I've seen some around architecture and things like that, but even that I'd rather have videos or VR. I'm open to other things. Like I said, I want to see it, I just haven't yet.

Evan: The example I'll give you is we invested in a company called Apx Labs [now called UpSkill], which is AR for B2B enterprise, so manufacturing, healthcare, logistics, and it's amazing what they're doing in those spaces on an ROI basis. I think before we get to VR it's actually going to be more AR opportunities, even though VR might be a holy grail until the next holy grail happens of dating thousands of people simultaneously.

Back to these attributes of visual content combined with people sharing. This seems like a core passion of yours, and a belief about value creation. When companies come to you, let's not talk about the ones that you already know. The more challenging thing for the entrepreneurs in the audience, saying “Bijan's smart, I want to meet him at the right time for our business,” leads to the question: when is that? And second: when is there enough validation in a new relationship with a company that will inspire you?

The product vision or product itself must be inspiring, that's the paramount thing. The other part is, can I imagine if I wasn't doing what I was doing for a living, would I imagine that I would legitimately want to work for these founders. If those two things don't compile, then I tend not to want to invest. 

-Bijan Sabet

Bijan: We get involved pre-product as well as post-product, so it's not a particular stage. Personally, I really look for a few things that I care about, and then the stage is almost less specific.

Evan: What are those?

Bijan: I feel like it is the product - product vision or product itself must be inspiring, that's the paramount thing. The other part is, can I imagine if I wasn't doing what I was doing for a living, would I imagine that I would legitimately want to work for these founders. If those two things don't compile, then I tend not to want to invest.

Evan: That second one, I like the way you said that because you actually would be working for them and vice versa, as a coach as we talked about earlier.

Bijan: Yeah.

Evan: It's basically investing in somebody that you would want to collaborate with, and your example is join them on a team, but you're in effect joining them as an investor.

Bijan: Yeah, and I actually mean literally joining as an employee, if I wasn't doing what I'm doing. If I was unable to say yes to that question, then how am I going to convince their future VP of Marketing that this is where that person should work, and so on and so forth. The times where I've strayed from that in the last 11 years are the times where I'm like, ah, I wish I had kept that personal criteria going, and the times it's worked out are when I stayed true to it. It doesn't work for everyone; every investor has their own point of view of what makes them excited, but on a personal level that's what I consider.

Evan: Have you noticed traits of those people that work better?

Bijan: Yeah. There are big character differences or people differences between, say, a David Karp and a Biz Stone; they're very different people. Or Palmer Luckey, or Michael Pryor of Trello. They're quite different, but they all have a very mission driven sense of why they're doing what they're doing, and I feel like it resonates for me.

David Karp: Leica MP, Leica 50mm Summilux, Kodak Tri-X 400 ©Bijan

Audience Question 3: As a fellow photography enthusiast, more of a personal question for you, connecting startups to photography: at a personal level, are there things you believe are missing in photography today, from the technical camera side to post-processing, like out-of-focus issues or low-light issues, or how to make a better picture composition? There are a bunch of things which I feel are open there; it's not solved yet. Do you think there could be a company built on any of these, and which one would you bet on?

©Robert Wright/LDV Vision Summit

Bijan: On the device hardware side I feel we have all the capability we need now, from an iPhone to a Hasselblad. If you're not making great photos, it's not the camera's fault anymore. That may not even have been true 50 years ago, but it's certainly not true today. I think there's still a lot of headache with sharing, collaborating, storing, backing up. That's still a mess. Whether it's a venture startup opportunity or not remains to be seen, I guess. In families I find, and it's not just my own family, although it's certainly true in our case, the whole thing of sharing, of “oh, you took those pictures on the family vacation, I did too,” seems kind of crazy now. We're all uploading to places like Facebook, but they compress the hell out of the photos; they look like shit after a while. I just think that cannot be the answer long term.

There's still, I feel, a missing piece around what I'll loosely call workflow: backups, sync, sharing amongst the people that you care about. I don't know where that leads us.

Evan: When you make a picture in any format, how do you choose where to share it?

Bijan: Tumblr's my go-to place for things that I want to publicly share. If not, I'm still old-school. I have a private Flickr account just for our family, my own family, my brother's family, my mom and dad, because my parents aren't on Facebook.

Evan: They're on Twitter ... I mean they're on Flickr.

Old Car, Mission Street, SF: Hasselblad 503cw, Kodak Portra 400 ©Bijan

Bijan: They are on Twitter, yeah, but they're part of the private Flickr group. It seems a little ridiculous that I'm still using Flickr, but that's the one use case.

Evan: Hearing it in the context of our conversation, it seems like it makes sense.

Bijan: It does, but given what's happening in the market it feels like I better find a new answer.

Evan: I feel you should stay there.

Bijan: I adore Tumblr. I think it's still my favorite way to share my photos.

Evan: Do you ever put any photos on Twitter? I think I've seen some.

Bijan: I do. I use Twitter for different things, but yeah, I definitely do.

Evan: Is it a different type of photograph, different kinds of public events, versus your own photography that you do with your Hasselblad, where it's more about going out to make pictures?

Bijan: I definitely share photos from my Hasselblad on Twitter, but there's something about the Hasselblad: it's six-by-six, it's this big photo, and then you're sharing it on a mobile phone. I'd like to think people are seeing it on a bigger screen, but that may just be naive thinking on my part. Twitter for me is just being part of the public conversation. I love Twitter, but it's less about photo sharing than everything else.

Evan: One of the questions that I frequently ask everybody, and I like the composite of all the answers, is very simple, and I asked Howard Morgan earlier: in one word, your favorite personality trait of an entrepreneur, and your most disliked personality trait.

Bijan: Creative, on the plus. On the negative, for the ones that I have a harder time with: indecisive. Indecision in founders, if I were to pick one thing. It's hard to beat. We're all human, we're trying to figure this stuff out.

Evan: Mine are passion and selfishness. Obviously some people would say selfishness is a good thing or a bad thing, but I look at it as a horrible thing because it's not being a team player, it's not building the company; it's all about the individual, which sometimes works and sometimes doesn't. The reason I like the one word answers, even though they're hard, is because they're very easily actionable for people when they hear them. On that note, a round of applause for Bijan. Thank you very much.

Bijan: Thank you very much for having me.

The annual LDV Vision Summit will be occurring on May 24-25, 2017 at the SVA Theatre in New York, NY.

Hired at Facebook After Showcasing Research in Visual Technology at the LDV Vision Summit: An interview with Divyaa Ravichandran

Divyaa Ravichandran from CMU showcased her project “Love & Vision” ©Robert Wright/LDV Vision Summit

The LDV Vision Summit is coming up on May 24-25, 2017 in New York. Through March 31 we are collecting applications to the Entrepreneurial Computer Vision Challenge and the Startup Competition.

Divyaa Ravichandran was a finalist in the 2016 Entrepreneurial Computer Vision Challenge (ECVC) at the LDV Vision Summit. Her project, “Love & Vision,” used Siamese neural networks to predict kinship between pairs of facial images. It was a major success with the judges and the audience. We asked Divyaa some questions about what she has been up to over the past year since her phenomenal performance:
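For readers unfamiliar with the approach: a Siamese network applies one shared embedding function to both inputs and scores the pair from the distance between the two embeddings. Here is a minimal NumPy sketch of that forward pass; the weights, dimensions, and scoring function are illustrative assumptions, not details of Divyaa's actual model:

```python
import numpy as np

def embed(image_vec, W):
    """Shared embedding: the SAME weight matrix is used for both images."""
    return np.tanh(W @ image_vec)

def kinship_score(img_a, img_b, W):
    """Siamese forward pass: embed both inputs with shared weights,
    then map the L1 distance between embeddings to a (0, 1] score.
    A score of 1.0 means the embeddings are identical."""
    ea, eb = embed(img_a, W), embed(img_b, W)
    dist = np.abs(ea - eb).sum()
    return 1.0 / (1.0 + dist)

# Toy inputs: 64-dim "image" vectors and random shared weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64)) * 0.1
a = rng.standard_normal(64)
b = rng.standard_normal(64)

print(kinship_score(a, a, W))  # identical inputs give exactly 1.0
print(kinship_score(a, b, W))  # different inputs give a lower score
```

In a real system the embedding would be a trained convolutional network and the distance-to-score mapping would be learned, but the defining property shown here, one set of weights shared across both branches, is what makes the architecture "Siamese."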

How have you advanced since the last LDV Vision Summit?
After the Vision Summit I began working as an intern at a startup in the Bay Area, PerceptiMed, where I worked on computer vision methods to identify pills. I specifically worked with implementing feature descriptors and testing their robustness in detection tasks. Since October 2016, I’ve been working at Facebook as a software engineer. 

What are the 2-3 key steps you have taken to achieve that advancement?
a. Stay on the lookout for interesting o