45 Billion Cameras by 2022 Fuel Business Opportunities

This exclusive research by LDV Capital is the first publicly shared, in-depth analysis estimating how many cameras will be in the world in 2022. We believe it is a conservative forecast, as additional sectors will be included in future research.

The entire visual technology ecosystem both drives and is driven by the integration of cameras and visual data. Visual technologies are any technologies that capture, analyze, filter, display or distribute visual data for businesses or consumers. They typically leverage computer vision, machine learning and artificial intelligence. 

Over the next five years there will be a proliferation of cameras integrated into products across industries and markets. A paradigm shift will take place in the meaning and use of a camera.

Taking into account the industries that will embed cameras into products, those that will add additional cameras to products, and new vision-enabled products that will arise, the number of cameras will grow at least 220% in the next five years. 

This growth in cameras creates tremendous business opportunities in the capture, analysis and interpretation of visual data. Cameras are no longer just for memories. They are becoming fundamental to improving business and society. Most of the pictures captured will never be seen by a human eye.

This 19 page report is the first of a multi-phased market analysis of the visual technology ecosystem by LDV Capital. Facts and trends include:

  • Global Camera Forecast
  • Paradigm Shift in Visual Data Capture
  • Depth Capture & New Verticals Driving Growth
  • LDV Market Segments To Watch
  • Visual Technology Ecosystem Growth
  • Processing Advances Enable Leaps in Visual Analysis
  • War Over Artificial Intelligence Will Be Won with Visual Data

Key Findings:

  • Most of the pictures captured will never be seen by a human eye.
  • A paradigm shift will take place in the meaning and use of a camera.
  • Over the next five years there will be a proliferation of cameras integrated into products across industries and markets.
  • Where there is growth in cameras there will be tremendous business opportunities in the capture, analysis and interpretation of visual data.
  • Depth capture will double the number of cameras in handheld devices.
  • By 2022, the number of cameras will be nearly 12X the 2012 figures.
  • Your smartphone will have between 4 and 10 cameras by 2022.
  • The Internet of Eyes will be larger than the Internet of Things. 
  • In the next five years, robotics will have 20X more integrated cameras.
  • By 2022, all new vehicles will be equipped with more than 25 cameras, and this does not include LiDAR or radar.

Download the full report from our Insights page.

We look forward to hearing your insights, learning about your startups and reading your research papers on how businesses are addressing these challenges and opportunities.

Timnit Gebru Wins 2017 ECVC: Leveraging Computer Vision to Predict Race, Education and Income via Google Street View Images

Timnit Gebru, Winner of the 2017 ECVC © Robert Wright/LDV Vision Summit

Our annual LDV Vision Summit has two competitions. Finalists receive a chance to present their work in front of 600 top industry executives, venture capitalists, and recruiters. Each winning competitor is also awarded $5,000 in Amazon AWS credits. The competitions:

1. Startup competition for promising visual technology companies with less than $2M in funding

2. Entrepreneurial Computer Vision Challenge (ECVC) for computer vision and machine learning students, professors, experts or enthusiasts working on a unique solution to empower businesses and humanity.

Competitions are open to anyone working in our visual technology sector such as: empowering photography, videography, medical imaging, analytics, robotics, satellite imaging, computer vision, machine learning, artificial intelligence, augmented reality, virtual reality, autonomous cars, media and entertainment, gesture recognition, search, advertising, cameras, e-commerce, visual sensors, sentiment analysis, and much more.

The ECVC provides contestants the opportunity to showcase the technology piece of a potential startup company without requiring a full business plan. It is a unique opportunity for students, engineers, researchers, professors and hackers to test the waters of entrepreneurship in front of a panel of judges including top industry venture capitalists, entrepreneurs, journalists, media executives and recruiters.

For the 2017 ECVC we had an outstanding lineup of finalists, including:

  • Timnit Gebru, PhD from Stanford University on “Predicting Demographics Using 50 Million Images”
  • Anurag Sahoo, CTO and Mick Das, CPO of Aitoe Labs
  • Akshay Bhat, PhD Candidate and Charles Herrmann, PhD Candidate from Cornell University on “Deep Video Analytics”
  • Elena Bernardis, PhD of the University of Pennsylvania Children’s Hospital with “Spot It - Quantifying Dermatological Conditions Pixel-by-Pixel”
  • Bo Zhu, PhD of Harvard Medical School’s Martinos Center for Biomedical Imaging presenting “Blink,” about synthetic human vision
  • Gabriel Brostow from University College London with “MonoVolumes,” a combination of MonoDepth and Volume Completion to understand 3D scene layout

Congratulations to our 2017 LDV Vision Summit Entrepreneurial Computer Vision Challenge Winner: Timnit Gebru  

© Robert Wright/LDV Vision Summit

What was the focus of your winning research project?
We used computer vision algorithms to detect and classify cars in 50 million Google Street View images. We then used the characteristics of these detected cars to predict race, education, income levels, voting patterns and income segregation levels. We were even able to see which city has the highest/lowest per capita CO2 footprint.
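To make that pipeline concrete, here is a minimal sketch of the detect, aggregate, regress pattern the project describes. The stock COCO-pretrained detector, the score threshold and the synthetic regional data below are illustrative assumptions only; the actual study trained its own car detectors and fine-grained make/model classifiers on 50 million images.

```python
# Sketch of detect -> aggregate -> regress. All model/data choices are
# stand-ins for illustration, not the study's actual pipeline.
import numpy as np
import torch
import torchvision
from PIL import Image
from sklearn.linear_model import Ridge
from torchvision.transforms.functional import to_tensor

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
CAR = 3  # 'car' in the COCO label map used by torchvision

def count_cars(image_path: str, score_thresh: float = 0.8) -> int:
    """Count car detections in one Street View-style image."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = detector([img])[0]
    keep = (out["labels"] == CAR) & (out["scores"] > score_thresh)
    return int(keep.sum())

# Aggregate per-region car statistics, then regress a known demographic
# on them (synthetic numbers stand in for census data here).
rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(100, 1)).astype(float)          # mean cars/image per region
y = 30_000 + 4_000 * X[:, 0] + rng.normal(0, 5_000, 100)   # synthetic median income
model = Ridge().fit(X[:80], y[:80])
print("held-out R^2:", round(model.score(X[80:], y[80:]), 3))
```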
 
As a PhD candidate - what were your goals for attending our LDV Vision Summit? Did you attain them?
I mostly wanted to meet other people in the field who might have ideas for future work or collaborations. After the competition, I was contacted by venture capitalists and people whose startups are working on related things. In addition to that, I received some interesting ideas from conference attendees (e.g. analyzing the frequency of trash collection in neighborhoods to get some signal regarding neighborhood wealth).
 
Why did you apply to our LDV Vision Summit ECVC? Did it meet or beat your expectations and why?
I applied because Serge Belongie (Professor at Cornell Tech and Expert in Residence at LDV Capital) thought it was a good idea. One of his many research interests is similar to my line of work. Since our work has real-world applications, I think he felt that presenting it to the LDV community would help us think of ways to make it more accessible. I didn’t know what to expect, but it definitely beat my expectations. I have never been at a conference that brings together entrepreneurs who are specifically interested in computer vision. I didn’t know that the vision community was so large, and that many VCs were thinking of companies with a computer vision focus (this is different from thinking of AI in general).
 
Why should other computer vision, machine learning and AI researchers attend next year?
This is unlike any other conference out there: it is the only one I know of that focuses solely on computer vision yet also brings together researchers, investors and entrepreneurs. 
 

© Robert Wright/LDV Vision Summit

What was the most valuable part of your LDV Vision Summit experience aside from winning the ECVC?
Meeting others whose work is in a similar space: for example, people who founded companies that are based on analyzing publicly available visual data. One of the judges founded such a company. It helped me think of ways in which my research could be commercialized (if I decided to go that route).
 
Do you have any advice for researchers & PhD candidates that are thinking about evolving their research into a startup business and/or considering submitting their work to the ECVC?
I advise them to think of who exactly their product would benefit and what their API would be like. Even though I was an entrepreneur for about a year, I am still coming from a research background. So I wasn’t thinking about who exactly the customers of my work would be (except for other researchers) until my mentoring sessions with Evan [Nisselson, GP of LDV Capital].
 
What are you looking to do with your research & skills now that you have completed your PhD?
I will be a postdoctoral researcher continuing the same line of work but also studying the societal effects of machine learning and trying to understand how to create fair algorithms. We know that machine learning is being used to make many decisions: for example, who will get high interest rates on a loan, or who is more likely to have high crime recidivism rates. The way our current algorithms work, if they are fed biased datasets, they will output biased conclusions. A recent ProPublica investigation started a debate on the use of machine learning to predict crime recidivism rates. I am very worried about the use of supervised machine learning algorithms in high-stakes scenarios.
 

© Robert Wright/LDV Vision Summit

Thank You for Making Our 4th Annual LDV Vision Summit a Success!

Startup Competition Judges Day 2 (in no particular order): Judy Robinett, JRobinett Enterprises, Founder, Author of "How to Be a Power Connector"; Tracy Chadwell, 1843 Capital, Founding Partner; Vic Singh, ENIAC Ventures, General Partner; Zack Schildhorn, Lux Capital, Partner; Jenny Fielding, Techstars, Managing Director; Emily Becher, Samsung, Managing Director; Clayton Bryan, 500 Shades/500 Startups Fund, Venture Partner, and Dorm Room Fund, Partner; Jessica Peltz-Zatulove, KBS Ventures, Partner; Eric Jensen, Aura Frames, CTO; Claudia Iannazzo, AlphaPrime, Managing Partner; Scott English, Hearst Ventures, Managing Director ©Robert Wright/LDV Vision Summit

Our 2017 Annual LDV Vision Summit was an absolutely amazing event, thanks to all of you brilliant people.

YOU are why our annual LDV Vision Summit gathering is special and a success every year. Thank You!

We are honored that you fly in from around the world each year to share insights, inspire, do deals, recruit, raise capital and help each other succeed!  

Congratulations to our competition winners:
- Startup Competition:  Fantasmo.io, Jameson Detweiler, Co-Founder & CEO
- Entrepreneurial Computer Vision Challenge: Timnit Gebru, Stanford Artificial Intelligence Laboratory, PhD Candidate

"LDV is a really interesting intersection of technologists, researchers, large tech companies, investors and entrepreneurs. There is nothing else like this out there. People are very open to sharing and helping the community advance together." Jameson Detweiler, Fantasmo.io Co-Founder & CEO

"I've never seen a conference like this - you have pure computer vision conferences like CVPR or ICCV or you have GTC-type conferences that are based on one company's resources.  This is an interesting mix of something computer vision and entrepreneurial - it is very unique in that sense, I have never seen anything like it before. It is a lot of fun." Timnit Gebru, PhD Candidate at Stanford Artificial Intelligence Laboratory

Day 2 Fireside Chat: Albert Wenger, Partner at Union Square Ventures and Evan Nisselson, General Partner at LDV Capital ©Robert Wright/LDV Vision Summit

A special thank you to Rebecca Paoletti and Serge Belongie, as the summit would not exist without them!

“Loved hearing about all the practical applications for computer vision at LDV Vision Summit. Feels like the time has finally come for amazing transformation!" Jenny Fielding, Managing Partner at TechStars

The quotes below from our community are why we created our LDV Vision Summit. We could not have succeeded without the tremendous support from all of our partners and sponsors:

Panel Day 1: Trends and Investment Opportunities in Visual Technologies
Moderator: Jessi Hempel, Backchannel, Head of Editorial with Panelists: Rudina Seseri, Glasswing Ventures, Founder & Managing Partner and Rohit Makharia, GM Ventures, Sr. Investment Manager
©Robert Wright/LDV Vision Summit

"The LDV Vision Summit is vibrant, all around me there is so much curiosity and conversation because it is the people who are working on the very edge of these new technologies. These are the conversations that are going to make everything happen and you can just feel that when you're here." Jessi Hempel, Head of Editorial at Backchannel

"My main takeaway is that there are lots of people focused on so many aspects of bringing computer vision to market. This reaffirms my belief that vision is going to play a central role in so many aspects of our lives - from enterprise to retail to autonomous vehicles, etc. The LDV Vision Summit is geeky + fun. It is a collaborative, vibrant environment that brings together a community of likeminded people with very different backgrounds." Rohit Makharia, Senior Investment Manager at GM Ventures

"The LDV Vision Summit is very unique, usually academic conferences are very research focused and business conferences are business orientated. This is a unique combination of the two and, especially in a field like computer vision, with the way that it is growing, it seems very necessary. This is a fantastic place to meet both researchers and business people." Ira Kemelmacher-Shlizerman Research Scientist at Facebook and Assistant Professor at U. Washington (Sold Dreambit to Facebook)

“The energy is amazing, everyone is curious, interested outside of their wheelhouse. Everyone wants to see what is the next big thing and what are the big things that are happening right now.” Matt Rosen, Director, Low-field MRI Lab at MGH/Martinos Center for Biomedical Imaging

“There have been a lot of very exciting discussions around visual technology and autonomous driving. It is interesting to see many different perspectives on it from sensors, from AI, from computer vision, all these different perspectives coming together. It is still a futuristic technology that we want to address and the LDV Vision Summit is great because it gathers top scientists and researchers as well as VCs to discuss how to get to that future.” Jianxiong Xiao, "ProfessorX", Founder & CEO of AutoX

"LDV Vision Summit looks at the cutting edge of all visual technology...you have a lot of brainpower in the room and you can feel the wheels turning as you watch the speakers."  Mia Tramz, Managing Editor, LIFE VR at Time Inc

"Computer vision sits at the heart of the big emerging platforms including autonomous transport, robotics, AR and AI. The LDV Summit provided a great foray into the future of computer vision and more importantly the impact it has on market sectors today through an impressive lineup of speakers, presenters, domain experts and startups." Vic Singh, Founding General Partner, Eniac Ventures

Keynote Day 1: Godmother of VR Delivers Immersive Journalism to Tell Stories That Hopefully Make a Difference and Inspire People To Care, Nonny de la Peña, Godmother of VR, Emblematic ©Robert Wright/LDV Vision Summit

“The business sector that is going to be most disrupted by computer vision and AI in the short term is transportation, so companies like Uber, taxi companies and the entire car and automotive industry will completely change in the coming years. The coolest thing I learned this morning was from the godmother of VR, how they are looking to change journalism and the way we capture events. The Vision Summit is pretty amazing, I am really impressed by the content, I am really glad I made it.” Clement Farabet, VP of AI Infrastructure at Nvidia (Sold MADBITS to Twitter)

“We are seeing visual technologies, especially combined with AI and machine learning, disrupt a broad array of existing markets and create new ones. From the role they are playing in autonomous vehicles, to transforming marketing technologies, to the roles they are playing in physical and cyber security - and of course the role they are playing around consumer electronics and robotics. It is comforting to know everyone is just as excited as I am about computer vision and AI, and to see how big the opportunity is and how early in the cycle we are as well.” Rudina Seseri, Founder & Managing Partner of Glasswing Ventures

"My second time attending the LDV Vision Summit was even better than the first.  A great mix of accomplished technical people and energetic young entrepreneurs." Dave Touretzky, Research Professor, Computer Science at Carnegie Mellon University

"It was fascinating to see a broad range of new visual technologies. I left the Summit full of ideas for new applications." Tom Bender, Co-Founder of Dreams Media, Inc.

“The LDV Vision Summit gave me the opportunity to discover new applications of computer vision and meet leaders at the forefront of really interesting innovations and startups.” Elodie Mailliet Storm, JSK Fellow in Media Innovation at Stanford

 "This cross-pollination of all different sectors is quite unique - especially coming from an academic setting. To interact with all of these different folks from industry, research and sciences, and from media really inspires me to think about all sorts of new ideas." Bo Zhu, Postdoctoral Research Fellow at MGH/Martinos Center for Biomedical Imaging

"It was enlightening and fascinating to see the potential of the tech that's driving a visual communications revolution." Scott Lewis Photography

Panel Day 2: What’s On Now?  Moderator: Rebecca Paoletti, Cake Works, CEO with Panelists: Brian Rifkin, JW Player, Co-Founder, SVP Strategic Partnerships, Michael Downing, Tout, Founder & CEO, Orlando Lima, Viacom/VH1, VP Digital ©Robert Wright/LDV Vision Summit

Panel Day 2: What’s On Now? 
Moderator: Rebecca Paoletti, Cake Works, CEO with Panelists: Brian Rifkin, JW Player, Co-Founder, SVP Strategic Partnerships, Michael Downing, Tout, Founder & CEO, Orlando Lima, Viacom/VH1, VP Digital ©Robert Wright/LDV Vision Summit

"It is a great opportunity to meet diverse people from all different industries, a good opportunity to network with interesting talks." James Philbin, Senior Director of Computer Vision at Zoox

"If you work in visual tech, you simply can't afford to miss the LDV Summit – it's a two-day power punch of engaging talks and wicked smart attendees." Rosanna Myers, Co-Founder & CEO of Carbon Robotics

"The LDV Vision Summit is somewhere in between an academic workshop and a venture capital roundtable - it is the kind of event that didn't exist before. You have academics, researchers, grad students, professors but you also have investors, VC and angels like you've never had before. It is very high energy, the atmosphere here is fun to see the two worlds come together. From the academic side, there are grad students and other researchers who have been inside a safe bubble for a long time. They are starting to hear that visual tech are really promising and they are curious about what is going on in the entrepreneurial world and the big companies out there. This is an event where there is enough familiar content for them to feel at home but enough new content, new people, contacts and so on to go outside of their comfort zone." Serge Belongie, Professor of Computer Vision at Cornell Tech

"The LDV Summit is two curated days of outside the box ideas with the key players from diverse industries that are collectively creating the future." Brian Storm, Founder & Executive Producer at MediaStorm
 

ECVC judges Day 1 (L to R) - Aaron Hertzmann, Adobe, Principal Scientist,  Ira Kemelmacher-Shlizerman Facebook, Research Scientist, U. Washington, Assist. Professor, Andrew Zhai, Pinterest, Software Engineer, Tali Dekel, Google, Research Scientist, Yale Song, Yahoo, Senior Research Scientist, Jan Erik Solem, Mapillary, CEO & Co-founder
(not pictured: Vance Bjorn, CertifID, CEO & Co-Founder, Rudina Seseri, Glasswing Ventures, Founder & Managing Partner, James Philbin, Zoox, Senior Director, Computer Vision, Josh Kopelman, First Round Capital, Managing Partner, Clement Farabet, Nvidia, VP AI Infrastructure, Adrien Treuille, Carnegie Mellon University, Assistant Professor, Serge Belongie, Cornell Tech, Professor, Manohar Paluri, Facebook, Manager, Computer Vision Group, Rohit Makharia, GM Ventures, Sr. Investment Manager) ©Robert Wright/LDV Vision Summit

"Evan sets the tone with a lot of energy, it is pretty amazing. I am typically around a lot of engineers and it is always great to get Evan up there with his big energy - he asks you honest questions. I also spend a lot of time in the hallway because you get to meet people from other years and keep up those relationships. This is an awesome opportunity to meet the whole mix, from employers, to startup people and investors." Oscar Beijbom, Machine Learning Lead at nuTonomy

"The LDV Summit is the perfect combination of a window into the future of some of the most interesting technologies and a welcoming place to make new connections. " Tracy Chadwell, Founding Partner of 1843 Capital

"The community that is assembled here, isn't anywhere else. There's not a place where all the operators in the computer vision space are in the same place at the same time. Everybody here is capturing the electricity of whats going on inside computer vision right now and being surrounded by everybody who cares about it like you do, is really invigorating. I was just having beers with the head of Uber ATG and he's making self-driving cars, I'm never going to, but he had an optimization method that is absolutely applicable to a thing I am working on, fighting human trafficking. The cross-disciplinary nature of this group creates a lot of opportunities to learn about techniques that are absolutely applicable to your problem domain that you would never see anywhere else. If you are into computer vision this is a place you need to be every year." Rob Spectre, Brooklyn Hacker. Former VP Developer Network at Twilio

"The summit far surpassed my expectations. The bringing together of entrepreneurs, researchers, executives, and investors provided for an exchange of ideas not usually possible in other forums. I definitely recommend the summit for anyone tangentially associated with computer vision and visual technologies!" Joshua David Cotton

©Dean Meyers/Vizworld

Fireside Chat Day 1: Josh Kopelman, Managing Partner of First Round Capital and Evan Nisselson, General Partner of LDV Capital ©Robert Wright/LDV Vision Summit

Keynote Day 1: How and Why Did University of Washington Professor Ira Kemelmacher-Shlizerman Build Dreambit and Sell To Facebook; Ira Kemelmacher-Shlizerman, Facebook, Research Scientist and University of Washington, Assistant Professor ©Robert Wright/LDV Vision Summit

Learn more about our partners and sponsors:

Organizers:
Presented by Evan Nisselson, LDV Capital
Video Program: Rebecca Paoletti, CakeWorks, CEO
Computer Vision Program: Serge Belongie, Cornell Tech
Computer Vision Advisors: Jan Erik Solem, Mapillary; Samson Timoner, Cyclops; Luc Vincent, Lyft; Gaile Gordon, Enlighted; Alexandre Winter, Netgear; Avi Muchnick, Adobe
Universities: Cornell Tech, School of Visual Arts, International Center of Photography
Sponsors: Amazon AWS, Facebook, GumGum, JWPlayer
Media Partners: Kaptur, VizWorld, The Exponential View
Coordinators, Entrepreneurial Computer Vision Challenge: Hani Altwaijry, PhD in Computer Science, Cornell University; Shaojun Zhu, PhD Candidate in Computer Science, Rutgers University; and Abhinav Shrivastava, PhD in Robotics (Vision & Perception), Carnegie Mellon University

AWS Activate: Amazon Web Services provides startups with the low-cost, easy-to-use infrastructure needed to scale and grow any size business. Some of the world’s hottest startups, including Pinterest, Instagram, and Dropbox, have leveraged the power of AWS to easily get started and quickly scale.

CakeWorks is a boutique digital video agency that launches and accelerates high-growth media businesses. Stay in the know with our weekly video insider newsletter. #videoiscake

Cornell Tech is a revolutionary model for graduate education that fuses technology with business and creative thinking. Cornell Tech brings together like-minded faculty, business leaders, tech entrepreneurs and students in a catalytic environment to produce visionary ideas grounded in significant needs that will reinvent the way we live.

Panel Day 2: Trends and Investment Opportunities in Visual Technologies
Moderator: Erin Griffith, Fortune, Senior Writer with Panelists: Vic Singh, ENIAC Ventures, General Partner, Claudia Iannazzo, AlphaPrime Ventures, Managing Partner & Co-Founder, Scott English, Hearst Ventures, Managing Director, Emily Becher, Samsung, Managing Director, Head of Samsung Next Start
©Robert Wright/LDV Vision Summit

Facebook’s mission is to give people the power to share and make the world more open and connected. Achieving this requires constant innovation. Computer vision researchers at Facebook invent new ways for computers to gain a higher level of understanding cued from the visual world around us, from creating visual sensors derived from digital images and videos that extract information about our environment, to further enabling Facebook services to automate visual tasks. We seek to create magical experiences for the people who use our products.

JW Player is the world’s largest network-independent video platform.  The company’s flagship product, JW Player, is live on more than 2 million sites with over 1.3 billion monthly unique viewers across all devices — OTT, mobile and desktop.  In addition to the player, the company’s services include advertising, analytics, data services, video hosting and streaming.

GumGum is a leading computer vision company with a mission to unlock the value of every online image for marketers. Its patented image-recognition technology delivers highly visible advertising campaigns to more than 400 million users as they view pictures and content across more than 2,000 premium publishers.

The International Center of Photography is the world’s leading institution dedicated to the practice and understanding of photography and the reproduced image in all its forms. Since its founding in 1974, ICP has presented more than 700 exhibitions and offered thousands of classes, providing instruction at every level.

Day 2 Keynote: 100 Million Pictures of Human Cells and Computer Vision Will Accelerate the Search for Disease Treatments
Blake Borgeson, Recursion Pharmaceuticals, CTO & Co-Founder
©Robert Wright/LDV Vision Summit

Kaptur is the first magazine about the photo tech space. News, research and stats along with commentaries, industry reports and deep analysis written by industry experts.

LDV Capital invests in people around the world who are creating visual technology businesses with deep domain expertise.

Mapillary is a community-based photomapping service that covers more than just streets, providing real-time data for cities and governments at scale. With hundreds of thousands of new photos every day, Mapillary can connect images to create an immersive ground-level view of the world for users to virtually explore and to document change over time.

The MFA Photography, Video and Related Media Department at the School of Visual Arts is the premier program for the study of Lens and Screen Arts. This program champions multimedia integration, interdisciplinary activity, and provides ever-expanding opportunities for lens-based students. 

VizWorld.com covers news and the community engaged in applied visual thinking, from innovation and design theory to technology, media and education. VizWorld is also a contributing member of the Virtual Reality/Augmented Reality Association. From the whiteboard to the latest OLED screens and HMDs, graphic recording to movie making and VR/AR/MR, VizWorld readers want to know how to put visual thinking to work and play. SHOW US your story!

AliKat Productions is a New York-based event management and marketing company: a one-stop shop for all event, marketing and promotional needs. We plan and execute high-profile, stylized, local, national and international events, specializing in unique, targeted solutions that are highly successful and sustainable. #AliKatProd

Robert Wright Photography clients include Bloomberg Markets, Budget Travel, Elle, Details, Entrepreneur, ESPN The Magazine, Fast Company, Fortune, Glamour, Inc., Men's Journal, Newsweek (the old one), Outside, People, New York Magazine, New York Times, Self, Stern, T&L, Time, W, Wall Street Journal, Happy Cyclist and more…

Prime Image Media works with clients large and small to produce high quality, professional video production. From underwater video to aerial drone shoots, and from one-minute web videos to full blown television pilots... if you want it produced, they can do it.

We are a family affair! Serge, August, Kirstine and Emilia Belongie along with Evan Nisselson celebrating Timnit Gebru's win in the 2017 Entrepreneurial Computer Vision Challenge. See you next year! #carpediem
©Robert Wright/LDV Vision Summit

Computer Vision Delivers Contextual And Emotionally Relevant Brand Messages

The power of object recognition and the transformative effect of deep learning to analyze scenes and parse content can have a big impact on advertising. At the 2016 Annual LDV Vision Summit, Ken Weiner, CTO at GumGum, told us about the impact of image recognition and computer vision in online advertising.

The 2017 Annual Vision Summit is this week, May 24 & 25, in NYC. Come see new speakers discuss the intersection of business and visual tech.

I’m going to talk a little bit about advertising and computer vision and how they go together for us at GumGum. Digital images are basically showing up everywhere you look. You see them when you're reading editorial content. You see them when you're looking at your social feeds. They just can't be avoided these days. GumGum has basically built a platform with computer vision engineers that tries to identify a lot of information about the images that we come across online. We try to do object detection. We look for logos. We detect brand safety, sentiment analysis, all those types of things. We basically want to learn as much as we can about digital photos and images for the benefit of advertisers and marketers.

The question is: what value do marketers get from having this information? Well, for one thing, if you're a brand, you really want to know: how are users out there engaging with your brand? We look at the fire hose of social feeds. We look, for example, for brand logos. In this example, Monster Energy drink wants to find all the images out there where their drink appears in the photo. You have to remember that about 80% of the photos out there have no textual information identifying the fact that Monster is in the photo, even though it is. You really need computer vision in order to understand that.

Why do they do that? They want to look at how people engage with them. They want to look at how people are engaging with their competitors. They may want to just understand what is changing over time. What are maybe some associations with their brand that they didn't know about that might come up. For example, what if they start finding out that Monster Energy drinks are appearing in all these mountain biking photos or something? That might give them a clue that they should go out and sponsor a cycling competition. The other thing they can find out with this is who are their main brand ambassadors and influencers out there. Tools like this give them a chance to connect with those people.
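As a toy illustration of brand-presence detection of this kind, the sketch below scores an image against a brand description with the publicly available CLIP model. It is not GumGum's production system; the checkpoint name, prompt wording and any threshold are assumptions for the example.

```python
# Illustrative zero-shot brand-presence scoring with a public CLIP model.
# Not GumGum's pipeline; prompts and checkpoint are assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def brand_score(image_path: str, brand: str = "a Monster Energy drink can") -> float:
    """Return a probability-like score that the image shows the brand."""
    image = Image.open(image_path).convert("RGB")
    texts = [f"a photo of {brand}", "a photo with no branded products"]
    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    return probs[0, 0].item()
```

In practice, images scoring above some tuned threshold would be routed to a heavier logo detector or a human for confirmation.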


What makes [in-image] even more powerful is if you can connect the brand message with that image in a very contextual way and tap into the emotion that somebody’s experiencing when they’re looking at a photo.

-Ken Weiner


Another product that’s been very successful for us is something we call in-image advertising. We came up with this kind of unit about eight years ago. It was really invented to combat what people call banner blindness, which is the notion that, out on a web page, you start to learn to ignore the ads that are showing at the top and the side of the page. If you were to place brand messages right in line with content that people are actively engaged with, you have a much better chance of reaching the consumer. What makes it even more powerful is if you can connect the brand message with that image in a very contextual way and tap into the emotion that somebody’s experiencing when they’re looking at a photo. Just the placement alone for an ad like this receives 10x the performance of traditional advertising because it’s something that a user pays attention to.

Obviously, we can build a big database of information about images and be able to contextually place ads like this, but sometimes situations will come from advertisers that won’t be able to draw upon our existing knowledge. We’ll have to go out and develop custom technology for them. For example, L’Oréal wanted to advertise a product for hair coloring. They asked us if we could identify every image out on different websites and identify the color of the hair of the people in the images so that they could strategically target the products that go along with those hair colors. We ran this campaign for them. They were really, really happy with it.

They liked it so much that they came back to us, and they said, “We had such a good experience with that. Now we want you to go out and find people that have bold lips,” which was a rather strange notion for us. Our computer vision engineers came up with a way to segment the lips and figure out, “What does boldness mean?” L’Oréal was very happy. They ran a lipstick campaign on these types of images.
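For the curious, here is one hedged guess at what a "bold lips" heuristic could look like: isolate the lip region with dlib's public 68-point facial landmarks, then compare its color saturation to the rest of the image. GumGum's actual approach was not disclosed; everything below is illustrative, and the landmark model file must be downloaded separately.

```python
# A heuristic "lip boldness" score. Illustrative only, not GumGum's method.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def lip_boldness(bgr: np.ndarray) -> float | None:
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    faces = detector(rgb, 1)
    if not faces:
        return None
    shape = predictor(rgb, faces[0])
    # Points 48-67 outline the outer and inner lips in the 68-point scheme.
    lips = np.array([(shape.part(i).x, shape.part(i).y) for i in range(48, 68)])
    mask = np.zeros(bgr.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(lips), 255)
    sat = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[..., 1]
    # Higher lip saturation relative to everything else ~ "bolder" lips.
    return float(sat[mask > 0].mean() / (sat[mask == 0].mean() + 1e-6))
```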

A couple years ago, we had a very interesting in-image campaign that I think might be the first time the actual content you're viewing became part of the advertising creative. Lifetime TV wanted to advertise the TV series Witches of East End. We looked for photos where people were facing forward. When we encountered those photos, we dynamically overlaid green witch eyes onto these people. It gives people the notion that they become a little witchy for a few seconds. Then that collapses and becomes a traditional in-image ad where somebody, after being intrigued by the eyes, can go ahead and click on it to watch a video lightbox with the preview for the show.
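A rough offline approximation of that effect is easy to sketch with OpenCV's stock Haar cascades: detect frontal faces, detect eyes within them, and blend a green overlay. The production ad unit ran dynamically in the browser; the cascade choices and blend weights below are assumptions.

```python
# Offline sketch of the "witch eyes" overlay using OpenCV's stock cascades.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def add_witch_eyes(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    overlay = bgr.copy()
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Search for eyes only inside each detected (roughly frontal) face.
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(gray[y:y + h, x:x + w]):
            center = (x + ex + ew // 2, y + ey + eh // 2)
            cv2.ellipse(overlay, center, (ew // 2, eh // 3), 0, 0, 360, (0, 255, 0), -1)
    return cv2.addWeighted(overlay, 0.6, bgr, 0.4, 0)  # translucent green eyes
```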

I just thought this was one of the most interesting ad campaigns I’ve ever seen because it mixes the notion of content and creative into one. What’s coming after this? Naturally, this will extend into video. TV networks are already training you to look at information in the lower third of the screen. It’s only natural that this will get replaced by contextual advertising the same way we’ve done it for images online.

Another thing that I think is coming soon is the ability to really annotate specific products and items inside images at scale. People have tried to do this using crowdsourcing in the past, but it’s just too expensive. When you're looking at millions of images a day like we do, you really need information to come in a more automated way. There’s been a lot of talk about AR. Obviously, advertising’s going to have to fit into this in some way or another. It may be a local direct response advertiser. You're walking down the street. Someone gives you a coupon for McDonald’s. Maybe it’ll be a brand advertiser. You see a car accident, and they’re going to remind you that you need to get car insurance.

Lastly, I wanted to pose the idea of in-hologram ads that I think could come in the future if these things like Siri and Alexa … Now they’re voice, but in the future, who knows? They might be 3D images living in your living room, and advertisers are going to want a way to basically put their name on those holograms. Thank you very much.

Get your tickets now to the next Annual LDV Vision Summit.

Get Ready to See More 3D Selfies in Your Facebook Feed

Alban Denoyel, CEO and Co-Founder of Sketchfab, spoke at the 3rd Annual LDV Vision Summit in 2016 about 3D content and 3D ecosystems and their impact on virtual reality.

At the 2017 Annual Vision Summit this week, we will be expanding upon the conversation with new speakers in the AR, VR and content creation spaces. Check out the agenda for more.

 

As Co-Founder and CEO of Sketchfab, I'm going to talk about user-generated content in a volumetric era. The VR headsets are all hitting the market today, tomorrow it's going to be the AR headsets, and we're starting to see holographic devices. And the big question is, of course, the content. What content are we going to consume with all this hardware?

If you look at the content today, I put it in two brackets. One is studio-generated content, like the Henry movie by Oculus. It's really great. There are two issues with that: one is that it takes time to make, and the other is that it takes money. And the result is that there is very little studio-made VR content. If you go to the Oculus store today, you'll see that for yourself.

The other bracket is user-generated content, and it has to be the bulk of VR content. Today, user-generated content for VR is mostly 360 video.

We live in a 3D world, as you all know, and we have six degrees of freedom. I can walk in a space in real life, and VR is able to recreate the same thing, which is what we need to get a real sense of presence. The advanced VR headsets have positional tracking, which lets you walk inside a space in all freedom. So which content is going to be able to serve this ultimate VR promise?

The good news is that we're entering an era of 3D creation for all, thanks to two trends. One is much easier tools to create 3D content. I think the most iconic example of that is Minecraft. Maybe you don't think of it as a 3D creation tool, but there are hundreds of 3D creations coming from Minecraft on Sketchfab. Just by assembling small cubes you are able to build entire worlds, and then you can navigate into them in VR.

Another great example is Tilt Brush, which lets you make VR content in VR. I don't know if you've tried it, but it’s really fascinating. You create in VR and then you're able to revisit that in VR.

© Robert Wright/LDV Vision Summit

The second megatrend for 3D creation is 3D capture, and it is really fascinating to see how it has evolved over the past five years. The most famous project is maybe Project Tango by Google; they are shipping their first phone with a 3D sensor this summer with Lenovo. And if you look at the events on the Apple side, they bought PrimeSense three or four years ago. PrimeSense was the company making the Kinect, and all of this points to a future iPhone with a 3D camera. The day we have an iPhone with a 3D camera, you'll be able to capture spaces and people in 3D. If you look at how we've captured the world, we started with drawing, then we started taking pictures, and then we started taking videos. But as we live in a 3D world, 3D capture is going to be the next way we capture things.

And so, here is an example with my son, William. I make a 3D portrait of him every month. I took it with just a phone. It’s hard to show a 3D file on a 2D screen, and it’s not dancing yet, but I also have dancing versions of him.

3D capture is super important, but being able to distribute this content is equally important. When it comes to user-generated content you have to share it online and help it travel across the web. That's what we do at Sketchfab: we're a platform to host and distribute 3D files. With technologies like WebGL and WebVR we are able to browse this content in VR straight from a browser. A pretty good example of that is that we are natively supported in Facebook, which means that I can share this 3D portrait of my son, William, in a Facebook post and then prompt a VR view straight from my Facebook feed, just from the browser, without having to go to a store and install a crazy setup.

One area where user-generated 3D content is really booming is cultural heritage. A lot of museums are starting to digitize their collections in 3D. But also a lot of normal people, when they go to museums, are starting to take pictures of statues from various angles and then publishing them on the web. A very interesting initiative that started about two years ago and is still happening is around what happened in Syria: when ISIS started destroying art and museums there, a lot of people on the internet started crowdsourcing the 3D reconstruction of places like that. So here's an example of a temple in Palmyra that is preserved forever in a digital format.

Another very interesting vertical to me is documenting world events. With this technology we're able to see 3D data from an event pretty much the day it happens. It really gives a new perspective to an event, which is super interesting. On the left, you can see Kathmandu just after the terrible earthquake. The day it happened, a guy flew a drone over Kathmandu, generated a 3D map from it, and published it on Sketchfab. You were able, the same day, to walk through the devastated Kathmandu in VR just from the web. That was pretty fascinating. And then on the right, super different, is the memorial that appeared the day of Prince's death. People started putting flowers and guitars in front of a concert venue, and a guy just made a 3D capture of it; it’s a great way to document this place and this event.

3D capture is coming to all areas of content, and we are starting to see the same trends we saw on Instagram: people shooting their things, their food, their faces. So I think you can get ready to see more and more 3D selfies in Facebook news feeds.

Don't miss our 4th Annual LDV Vision Summit May 24 & 25 at the SVA Theatre in NYC.

Image Recognition Will Empower Autonomous Decision Making

Rudina Seseri, Founder & Managing Partner of Glasswing Ventures

Rudina Seseri is the Founder & Managing Partner of Glasswing Ventures. With over 14 years of investing and transactional experience, she has led technology investments and acquisitions in startup companies in the fields of robotics, Internet of Things (IoT), SaaS marketing technologies and digital media.

Rudina will be sharing her knowledge on trends and investment opportunities in visual technologies as a panelist and startup competition judge at the 2017 Annual LDV Vision Summit. We asked her some questions this week about her experience investing in visual tech and what she is looking forward to at the Vision Summit...

You are investing in Artificial Intelligence (AI) businesses which analyze various types of visual data. In your perspective, what are the most important types of visual data for artificial intelligence to succeed and why?
Nowadays, a key constraint for AI to succeed in perception tasks is good (i.e. labeled) datasets. Deep learning has allowed us to achieve "super-human" performance in some tasks, and computer vision is a key pioneering area - from LeCun's OCR in the 90s, to the new wave of AI excitement spurred by Andrew Ng – and others – with the unsupervised tagging of YouTube videos and deep nets' performance in the ILSVRC competition (an annual image recognition competition which uses a massive database of labeled images). 

Image recognition has now moved from single-object labeling to segment labeling and full scene transcription. Video has also seen impressive results. An important next step will be to see how we can move from perception tasks like image recognition to autonomous decision making. The results already achieved in games and self-driving cars are promising. One can think of applications in just about anything: autonomous vehicles, visual search, (visual) business intelligence, social media, visual diagnostics, entertainment, etc. However, I think the most important thing for success is to be able to match the type of data and algorithm to whichever problem you're trying to solve. The ability to create valuable datasets in new use cases will be essential for startups.
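The progression from single-object labeling to per-pixel labeling can be seen in a few lines with public torchvision checkpoints; the models and the file name below are illustrative stand-ins, not the systems Rudina references, and input normalization is omitted for brevity.

```python
# The same image recognized at two granularities: one whole-image label
# vs. a class label for every pixel. Illustrative public checkpoints.
import torch
from PIL import Image
from torchvision import models, transforms

prep = transforms.Compose([transforms.Resize((520, 520)), transforms.ToTensor()])
img = prep(Image.open("scene.jpg").convert("RGB")).unsqueeze(0)  # hypothetical file

classifier = models.resnet50(weights="DEFAULT").eval()
segmenter = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

with torch.no_grad():
    label = classifier(img).argmax(1).item()      # single object label
    pixels = segmenter(img)["out"].argmax(1)      # a class for every pixel
print(label, pixels.shape)  # e.g. an ImageNet class id and [1, 520, 520]
```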


I believe AI and vision will have a massive impact across sectors and industries which is why we decided to launch [Glasswing Ventures].

-Rudina Seseri


What business sector do you believe will be most disrupted by computer vision and AI?
That’s a tough one because I believe AI and vision will have a massive impact across sectors and industries, which is why we decided to launch the firm. From a vision point of view, we need to ask which business sectors rely (or could rely) the most on images; those are likely to be the ones "most disrupted" by AI. Within the enterprise, marketing and retail are likely to be among the earliest adopters. In terms of sectors, it's easy to see the impact that AI will have on e-commerce, transportation, healthcare diagnostics, security, etc.
 
You are speaking and judging at our LDV Vision Summit. What are you most excited about?
The LDV Vision Summit is a key event for anyone involved in computer vision. As a speaker and a judge, I get to share the stage with some of the pioneers in the domain and hear the pitches of some of the most promising entrepreneurs in the area. Being able to spend two days with all of you and discuss trends and the future of computer vision is invaluable. 

You’ve said “the skillset of data scientists will be rendered useless in 12-18 months. They will need to either evolve with new AI tools or become a new category of Machine Learning Scientists.” How does this rapid evolution in AI impact your investing strategy?
Data science is indeed evolving at a very fast pace. The exponential improvement in computing power, the ability of GPUs to parallelize data processing (crucial for CNNs), and the sheer abundance of data available have required data scientists to rethink how they can better leverage these capabilities and experiment with what was previously unthinkable. While most of the algorithms considered state-of-the-art today were developed over decades, the way data scientists use them has changed considerably - i.e. moving from feature engineering to architecture engineering.

Additionally, the community has fully embraced open-source, with most breakthroughs being published and algorithms shared. This means that savvy data scientists have to: know the advantages and limitations of each approach for their use case given the new computing/data constraints; be willing to experiment with new methods and embrace open-source while being able to build a sustainable competitive advantage; and be on top of the new developments in their area.

Finally, the emergence of data science at the center of AI development has created a new, major stakeholder in product teams (along with engineering and PM). A good dynamic between these three teams, with constant collaboration to push the limits of the technology while always focusing on creating a product that delivers superior value versus the status quo to the target customer, is key.
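The shift from feature engineering to architecture engineering that Rudina describes can be caricatured in a few lines: hand-crafted HOG features feeding a linear classifier versus choosing a network architecture and letting the features be learned from data. The dataset here is synthetic and both snippets are purely illustrative.

```python
# Feature engineering vs. architecture engineering in miniature.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
import torchvision

rng = np.random.default_rng(0)
gray_images = rng.random((20, 64, 64))   # stand-in dataset
labels = rng.integers(0, 2, 20)

# Then: a human designs the features (HOG); a shallow model fits them.
X = np.stack([hog(img) for img in gray_images])
clf = LinearSVC().fit(X, labels)

# Now: a human designs the architecture; the features are learned end to
# end during training rather than specified by hand.
net = torchvision.models.resnet18(weights=None, num_classes=2)
```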

This is the last week to get your discount tickets for the 4th Annual LDV Vision Summit which is featuring fantastic speakers like Rudina. Register now before ticket prices go up!

The Perfect Storm For Computer Vision To Reform Life Sciences

Judy Robinett, Founder & President of JRobinett Enterprises

Judy Robinett is known for her titanium rolodex. She speaks and consults with professionals, entrepreneurs, and businesses on the topics of strategic networking, relationship capital, startup funding strategy, strategic alliances, and leadership. Her book, "How to Be a Power Connector," was selected as the #1 Business Book for 2014 by Inc. Magazine. In her more than 30 years of experience as an entrepreneur and corporate leader, Judy has served as the CEO of public and private companies, in management positions at Fortune 500 companies, and on the advisory boards of top-tier venture firms. 

We are excited to have Judy as a judge at our 2017 LDV Vision Summit startup competition. Evan caught up with her in this last week before the Vision Summit about her experience investing in startups and the things she is looking forward to at the Vision Summit...

You see many startup pitches - what are the three most important reasons you would choose to invest in a team?
Character rules. While at first blush there are many solid deals with purported upsides, my first focus is on character. Howard Stevenson, who is known as the lion of entrepreneurship at Harvard, once told me that the first time someone was untruthful, he walked, knowing he would otherwise lose all his money regardless of how promising the deal was. Howard invested in well over 90 startups, and his book, “Winning Angels: The 7 Fundamentals of Early Stage Investing,” is a must-read.

Besides dishonesty, I try to avoid bad actors. Life is too long to deal with the dark triad: narcissists, Machiavellians and sociopaths. I want to make sure these are folks I like, trust and want to work with for the long haul.

Top questions I ask myself: does the team exhibit emotional IQ? How do they deal with conflict? Litigation is emotionally and financially draining, so it is better to determine ahead of time what happens if there is a blowup.

Second, does the team show the ability to learn and pivot? Are they coachable? How do they deal with feedback? It’s a rare biz model that doesn’t morph.

Finally, vision is grand, but do they have a customer? Paul Graham of Y Combinator said there are only two reasons startups fail: first is lack of a customer and second is lack of funding. A few months ago a founder called me after spending $12M to build out a platform, only to discover no one wanted it. 

Judy Robinett, LDV Community dinner May 2016 © Robert Wright

What business sector do you believe will be most disrupted by computer vision?
I’d bet on life sciences. Over 25 years ago scientists tried to use AI to diagnose cancer but failed. With Intel’s new 10 nm chip out later this year, we will find ourselves in a perfect storm: processing power, adoption across sectors globally, more investments, and a projected 150M users of consumer AI. We will be able to solve problems we couldn’t before. Think of IBM’s Watson, which has now read every single research paper available on cancer and tracks all clinical trials, and then imagine what will happen when pictures are added.
 
Future surgeons will be able to perform simulated operations.  We will see better diagnostics, improved quality care and lower prices.
 
At the White House Fintech summit, many in the audience were startled to see the adoption of roboadvisors like Betterment for financial advice. The same arguments were used a few years ago: that this industry needed ‘high touch and high trust’ and that new technology wouldn’t work.

What was your biggest mistake as an investor and what did you learn from it?
I got a $100K brick to the head when I fell in love with the product and drank the Kool-Aid from a charismatic founder with a big idea. My big takeaway was that I needed to step back, do the math, think hard about the assumptions and then do a gut check; then, and only then, would I invest.
 
Now I have Jack Welch’s quote, “Get Better Reality,” posted on my wall.

We are honored to have you judge the 2017 LDV Vision Summit startup competition. What are you most excited to see at our Summit?
I would love to see folks figuring out motion sickness, cancer diagnosis, next-gen headgear and MRIs, and expanded content, but honestly, I want to be surprised!

Accenture’s global technology trends report now lists AI as the number one trend, saying, “AI will be the new UI.” 

I’ve been watching new players, from Russia’s Ntech Labs with facial recognition to Boston’s emotion AI company Affectiva, which now has over 5 million facial videos. I’ve seen the narrative change from people dissing this technology to proclaiming it will be the next industrial revolution.

At the Vision Summit, I'm interested in seeing how speakers from companies like Upskill, PathAI, OrCam and AutoX showcase how they are utilizing AI and computer vision to revolutionize their industries as well.

You wrote a great book called “How to Be a Power Connector.” What was your goal in writing this book? Do you have one exciting success story after writing this book that you can share?
When I was CEO of a small public biotech I frequently spoke at BIO. I grew frustrated that promising drugs were falling by the wayside because people couldn’t connect the dots to critical resources needed for success. With 7.4 billion people on earth, $369 trillion in global private wealth projected by 2019 by Credit Suisse, and countless ideas, with information said to double daily with IoT, there is no lack of resources. But most people are in the wrong room talking to the wrong people, asking that Old Testament Job question: why me?
 
I wanted to change that by providing a clear, easy-to-follow roadmap with the latest research on strategy and relationships.  Nothing happens without people.
 
Three months after McGraw-Hill published my book, I heard from a young man in Africa who had applied my principles and obtained funding.  I did the happy dance.

Last chance to get your discount tickets to the 4th Annual Vision Summit, now through May 19!

Nicolas Pinto Predicted the Deep Learning Tsunami but Not the Velocity - Selling to Apple Was His Answer

Nicolas Pinto, Deep Learning Lead at Apple © Robert Wright/LDV Vision Summit

Nicolas Pinto is focused on mobile deep learning at Apple. At the 2016 LDV Vision Summit he spoke about his journey as a neuroscientist and artificial intelligence researcher turned entrepreneur who sold his company to Apple.

Discount tickets are available until May 19th for the 2017 Annual LDV Vision Summit to hear from other amazing researchers & entrepreneurs about their route to selling a successful business.

Good morning everyone. My name is Nico and I have ten minutes to talk to you about the past ten years, in which I went from being a neuroscientist at MIT and Harvard to creating a stealth deep learning startup in Silicon Valley and finally selling it to Apple. Let's go back to 2006, about 10 years ago. That's when I started to work with neuroscientist Jim DiCarlo at MIT and David Cox at Harvard. They had a very interesting approach: they wanted to do both reverse and forward engineering of the brain in the same lab. Usually these things would be done in different labs, but they wanted to do both in the same lab. I thought the approach was very interesting. They wanted to study natural systems, real brains, the system that works, and also build artificial systems at scale, a scale that approaches that of natural systems, so really big scale.

What do we see when we study natural systems? Let me go very quickly here. The first thing we see when we study the visual cortex (we are focusing on vision, obviously) is that it's basically a deep neural network, composed of many layers that share similar properties, repeated patterns. A lot of these properties have been described in the physiology literature, in many, many studies going back to the sixties, so you can look them up. The other thing we saw is that if you look at the modeling literature, there are many, many different ideas about how these things could be working, also starting in the sixties.

I think this will work really well.

Many, many different studies, many different models, many different ideas and parameters, ultimately culminating in convolutional neural networks. That's probably what you've heard of. These convolutional networks were popularized by Yann LeCun in the machine learning community but also by Tommy Poggio in the computational neuroscience community with the HMAX model. All these models kind of look the same and kind of look very different. You don't really know, and what's very interesting about them is that they have very specific details: some of them have to do with learning, some of them have to do with architecture. It's really hard to make sense of all that. On the learning side alone, there are many, many different ideas about how you can do learning, starting in the sixties and moving from computational neuroscience into machine learning. I'm not going to go into it, but across so many years it's very hard to explore this particular space.

© Robert Wright/LDV Vision Summit

What we saw back in the day was that the hypothesis space of all these different ideas, and of combinations of them, was overwhelming to explore. As a graduate student, you're looking at all of this; they all kind of make sense, kind of not, and you're not really sure how to combine them. The space was largely unexplored. If you take just one particular idea, for example here, you will see that it has many, many different parameters, depicted in red here. You have a lot of parameters and a lot of models, and it's very overwhelming. Again, for deep learning, so many parameters. How do you set those parameters?

The usual formula is that you take one grad student in a given lab. You take one particular model, usually a model derived from that particular lab, with its size limited by runtime. At the time everyone was running MATLAB. You tweak all the different parameters by hand, one by one, and hopefully you can crush a few benchmarks. You hope you can get this kind of work published, and you claim success. Don't forget, at the end of all of this, you get one Ph.D.

But if you tweak all of these different parameters by hand, one by one, not really knowing what you're doing, it's a little bit like what some people call graduate student descent: taking one grad student and exploring this space slowly, one step at a time. That's very aggravating and very, very boring.

We wanted to do something a little bit different. We would take one grad student, that would be me in this case. But what we wanted to do was test many, many different ideas and take big models, models at a scale that approaches that of natural systems. Hopefully we could crush a few benchmarks as well. Maybe we could even get that published, and hopefully get one Ph.D at the end of it.


If you want to have really good ideas, it's fairly simple. You just need to get many, many, many, many different ideas and just throw the bad ones away.

-Nico Pinto


The inspiration, I got it from this guy, Linus Pauling, double Nobel prize winner. He told me, well, he told everyone: if you want to have really good ideas, it's fairly simple. You just need to get many, many different ideas and throw the bad ones away. Very simple. In biology we are taught to do exactly that. It's called high-throughput screening. A very fancy name for a very beautiful technique that kind of imitates natural selection. Let me show you how it works in biology.

What you do is you plate a diversity of organisms; you're looking for an organism with some property you care about. You allow them to grow and interact with the environment. You apply some sort of challenge tied to the property you're looking for. You collect the surviving colonies, and ultimately you study and repeat until you find an organism that fits the bill. In biologically inspired computer vision, you do the same thing. You generate a bunch of random models from these many, many different ideas, some from you, some from the literature; apply some sort of learning to learn the synaptic weights and interact with the environment; test with a screening task, a particular vision task in this case; skim off the best models; study, repeat, and validate on other tasks; and hopefully you get what you're looking for, a really good visual system.
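In code, the screening loop Nico describes boils down to random search over a huge space of model parameters. Below is a minimal sketch of that idea in Python; the parameter space and the `train` and `evaluate` callables are hypothetical stand-ins for illustration, not the lab's actual pipeline.

```python
import random

# Hypothetical search space mixing "many, many different ideas" from the literature.
PARAM_SPACE = {
    "n_layers":      [2, 3, 4],
    "filter_size":   [3, 5, 7, 9],
    "n_filters":     [16, 32, 64, 128],
    "nonlinearity":  ["threshold", "saturation", "none"],
    "pooling":       ["max", "mean"],
    "learning_rule": ["hebbian", "none"],
}

def sample_random_model():
    """Generate one random candidate: pick a value for every parameter."""
    return {name: random.choice(values) for name, values in PARAM_SPACE.items()}

def screen(n_candidates, train, evaluate, keep=10):
    """High-throughput screening: generate, train, score, skim off the best."""
    scored = []
    for _ in range(n_candidates):
        params = sample_random_model()
        model = train(params)                      # "learn the synaptic weights"
        scored.append((evaluate(model), params))   # screening task, e.g. object recognition
    scored.sort(key=lambda s: s[0], reverse=True)
    return scored[:keep]                           # the "surviving colonies" to study further
```

The skimmed-off candidates then get re-validated on other tasks, mirroring the study-and-repeat step of the biological protocol.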

What's nice about this particular technique is that even though we call it high-throughput screening, it's basically just brute force. Right? It's a very nice name, we like that as scientists, but it's pure brute force. In this particular case we needed a lot of compute power to run all of these models. But this was back in 2006, basically the time when GPUs, these very cheap graphics processing units, started to become very, very powerful and actually programmable. We got lucky because we caught that trend very early on, basically ten years ago. The problem is that it took a while in the beginning because GPUs were quite complicated to program. We had to build everything from scratch: the computers, the software stack, the programming. It was a lot of fun but quite hard at the beginning since there were no libraries for it. We even went as far as building a cluster of PlayStation 3s back in 2007 to get the raw brute force power that we needed for these particular experiments.

We also got access to hundreds of GPUs, because at the time national supercomputing centers were building a lot of supercomputers with tons of GPUs, but not many people knew how to use them properly, so we had access to all of them because they wanted to get used. With a brute force approach, we could put them to work. We also taught courses back in 2008 and 2010 at Harvard and MIT on how to use these GPUs, so people could do more computational science cheaply with these graphical units.

Let me skip forward. We applied this technique and came up with a model that is all-encompassing, has tons of parameters, and can express all these different ideas I just mentioned. We applied our brute force technique, and at the end of the day we got very good results, and we were very surprised. The results were so surprising that they even got featured in Science in 2010. Not only were we surprised, but even Yann LeCun himself was surprised. He told us that some of this work was influential in that we uncovered some very important non-linearities using this kitchen sink approach.

© Robert Wright/LDV Vision Summit

Since we had some very interesting results, we wanted to see if we could apply this to the real world. We compared our technology to a commercial system called face.com, which later got bought by Facebook. We were able to crush their performance. We even got in touch with Google back in 2011, and they told us that we had influenced a little bit of their work, the early Google Brain work back in 2008.

We decided to start a company based on this, and the company was called Perceptio. It was a very early startup; you probably won't find much information about it. The goal of Perceptio was to come up with brain-inspired AI that you can trust. Trust was very important to us. We wanted to make sure we preserved the privacy of users.

Why a startup? Well, we wanted to make real progress, but we saw that academia and industry were kind of optimizing for progress and kind of not. On one side you have academia, which is a credit economy. In a credit economy, what you do is plant flags and guard territory. It's all about me, me, me first. You don't really know what's going on; you just have to plant flags, because that's how you get a career. Industry is a profit economy. You have to make money, and a lot of the time that means selling user data. We wanted to create a new kind of organization that would not operate like this. We had grandiose ideas like many others: an intersection of incubator, industry lab and academic lab, focused on progress only. It didn't work out in the end, but that's what we wanted to do.

The application we were focusing on was a small social camera, and our moat, our competitive advantage, was going mobile first. Everyone was going to the cloud; we wanted to bet against the cloud and go mobile first. Everyone was surprised at the time. It was 2012. Everyone was running deep learning on hundreds of thousands of CPU cores or big GPUs. People would say, "Why would you even try to do this? It's not even possible." But if you look at it carefully, it can be done on the compute side. Ultimately we did it, and a lot of the things that we uncovered back in 2012 are now being rediscovered by the community. Some people claimed that we could not do it because we would not see enough data. Well, it turns out that if you're the most popular camera in the world, you get to see a lot more data than the cloud. With this camera, if you sit right next to the sensor, you can get dozens of frames per second, and only a fraction of that ever goes to the cloud.

People get it now. We could preserve privacy. Ultimately, we were able to predict the timing of the deep learning tsunami but not its velocity. We had to scale, and as a small company the only way for us to scale was through an acquisition. The problem with an acquisition is that most companies run on the profit economy of selling user data, so it was really hard for us to find the right home for our technology and scale. We thought very hard, and we found a little fruit company back in Cupertino that, you know, thinks very differently. They think different about these things, they really do care about user privacy, and they don't sell user data. That's where I am right now. That's where Perceptio is right now, and that's it. This is the end of the ten minutes, ten years of my life. Thank you very much.

Get your tickets now to the next LDV Vision Summit to see other phenomenal speakers with stories like Nico's.

Creating Compelling VR & AR Experiences at LIFE VR

Mia Tramz, Managing Editor of LIFE VR at Time, Inc.

Mia Tramz is the Managing Editor of LIFE VR at Time. At the 4th Annual LDV Vision Summit, Mia will be giving a keynote on "Pioneering VR & AR for Media Companies." Evan had a chance to catch up with her in May about the virtual reality and augmented reality projects she is producing at LIFE VR and where she thinks virtual reality will go in the next 5-10 years.

Discount tickets still available for the next LDV Vision Summit May 24 & 25 in NYC.

Please share with us how and why you evolved from a more traditional photo editing career into multimedia and now the Managing Editor of LIFE VR.
Immersive, non-traditional storytelling has always been of interest to me and even as a photo editor on TIME.com I was looking to push the boundaries of the way in which we tell stories visually. In 2014, about a year after I had been hired, I produced TIME’s first underwater 360 video, Deep Dive, with Fabien Cousteau. That project set me on a path towards VR – soon thereafter I started researching how to produce a VR experience for TIME. Around the same time, LIFE VR – Time Inc.’s company-wide VR initiative – was approved and they were looking for ideas to put into production. I worked with several of TIME’s editors and reporters to come up with a list of experiences we could produce that year – which ended up being about ten pages long. I think when they saw my early enthusiasm for the medium and how much legwork I had done, they felt I’d be the right person to launch the brand.

Deep Dive with Fabien Cousteau - Courtesy of TIME

What is your biggest challenge in creating VR and AR content today?
I’ve found the challenges for creating AR and VR to be quite different. 

With VR the biggest challenge is getting proper resources behind the projects I feel are most important to produce. These are often ambitious, moon shot projects with price tags to match. Raising capital to make sure those projects are properly produced is no small feat. I’ve been very lucky to have had early support from Time Inc and our brands in both creating and promoting VR and 360 video; outside of our company, we've been fortunate to have support from partners such as AMD and HTC on past projects such as Remembering Pearl Harbor. We’ve also been lucky to work with supportive production partners who have, in many ways, made it possible to achieve a very high quality of storytelling.

With AR, the biggest challenge is producing content that isn’t a gimmick – it needs to have inherent value for the consumer so that activating it isn’t a chore. It should feel delightful and compelling, and the user should feel that they got a return that was worth their time and effort, just like with any digital content. There are many ways to implement AR, both editorially and for advertising clients. With the AR camera now available in the LIFE VR app that we launched last week with the Capturing Everest issue of Sports Illustrated, we can launch 2D and 360 video content as well as 3D CGI animation and graphics off of both our print products and pretty much any other physical object. We can also make the pages of our magazines, including advertisements, shoppable. Parsing out the most impactful way to implement AR throughout our brands – one that serves both our editorial and sales teams – will be an exciting challenge in the months to come.

Remembering Pearl Harbor - Courtesy of LIFE VR

I am sure you see many different story opportunities to publish in VR. How do you choose the best stories to produce in VR today? Can you give a couple of examples?
At this early stage, it’s a lot to ask a consumer to download our app, find a headset and then dedicate time to watching the experiences we create. My guiding principle has been that any experience we produce or distribute has to be compelling enough that a consumer would go to all those lengths to be able to watch it – and that it delivers once they’ve invested the time and energy. Beyond that, an important part of my job is finding unique ways to bring the DNA of LIFE Magazine to the work we do. So, for example, LIFE covered the attack on Pearl Harbor extensively; when we were looking into historical events to recreate, it was a moment in history that LIFE – and TIME – could speak to authoritatively and something that we could weave LIFE imagery and reporting into. With Capturing Everest, the VR and AR project we just launched with Sports Illustrated, LIFE famously covered Sir Edmund Hillary and Tenzing Norgay’s first ascent of the summit with iconic photography and written reporting. Tackling the first bottom-to-top climb of Everest in VR allowed us to bring the spirit of that storytelling to a new audience, and a new generation.

Capturing Everest © LIFE VR & Sports Illustrated

Which business sector do you believe will be most disrupted by VR and why?
At the moment I see VR augmenting and supplementing business sectors versus disrupting. If you look at education, it’s an incredibly powerful learning tool that enhances the curriculum teachers already have in place; it’s been a great tool for the military and medical fields for decades; when applied to film making and journalism, creators now have the option to weigh it against other more traditional methods of covering a story such as photo or video. In each case, I see VR becoming another tool in the tool box, not necessarily a disrupter or replacement. 

Depending on how quickly facial recognition techniques evolve, I could see video conferences perhaps eventually being replaced by VR or AR conferences. The gaming industry may present the biggest question mark – but again it seems like VR will be a great option among many others, not necessarily a replacement for existing gaming consoles.

What excites you about speaking at our LDV Vision Summit?
Sharing of information is such a key part of the development of any industry or innovation, especially in its early stages – I’m a big believer in collaboration and the ‘all ships rise’ approach. What we are able to inspire in and learn from each other will shape the future of AR, VR and MR as much as what we are able to invent and discover on our own. Getting to share what I’ve learned and to learn from others is the most exciting part of participating in the summit.

What is not possible in VR today that you hope will be possible in 5-10 years?
The headsets themselves right now can be limiting. Implementation of inside-out tracking and accommodating AR and MR in addition to VR are all innovations that seem to be on the way, and I think they will support both user adoption and content creation.

The realistic rendering of human faces and registering of emotions in CG is also still a huge challenge and prohibitively expensive in most cases. Companies like 8i have developed methods for volumetrically capturing living people and their technology is improving day by day; incorporating some AI seems to be a necessary next step. When it comes to people who are no longer with us – or who never existed in the first place – rendering a face, emotions and responses that are believable is a big challenge. 

To experience LIFE VR, download the LIFE VR app for iOS and Android; visit the LIFE VR channel on Samsung VR; or visit time.com/lifevr. Certain experiences are also available on Steam, Viveport and in the Oculus Store.

reCaptcha Fights Spam And Sparked The Burger vs. Sandwich Debate

Ying Liu, Software Engineer at Google ©Robert Wright/LDV Vision Summit

Ying Liu is a Software Engineer at Google and was part of the team that created reCaptcha nine years ago. reCaptcha has changed a lot since then and there are some really exciting initiatives she is working on at Google that she told us about at the 3rd Annual LDV Vision Summit in 2016.

Tickets for the upcoming Vision Summit are on sale - get yours now!

reCaptcha is an anti-abuse tool. Our mission is to keep the internet organic, green, free of spam and abuse. Earlier I was talking to someone in the group, trying to describe what reCaptcha is. At first he didn't get it, so I started verbally describing reCaptcha, and suddenly he got it and said, "Oh, it's that annoying thing on the internet." First of all, it's very sad to hear that associated with reCaptcha's brand. I hope by the end of the session I will have changed your mind on this.

©Robert Wright/LDV Vision Summit

This is reCaptcha nine years ago. What we did at that time was distort synthetic text so that computers could not read it but humans still could. As computer vision improved, OCR got better, and machines got really good at recognizing this kind of distortion. As a result, we had to make the distortion harder and harder, until three years ago it looked like this. I'm going to give you a second to try to transcribe what it says, but don't blame me if it hurts your eyes.

So we said: let's test this on humans and see how well humans can solve them. Among known humans, only one-third could recognize this. And then we said, "Okay, how about machines?" Computer vision was getting really good. So we ran these hard captchas through the advanced machine learning systems inside Google. And guess what? They could solve them at 99.8% accuracy. The whole game flipped around: now captchas were easy for bots and machines, and hard for humans. That's when we knew we had to totally change the game in order to stay in the game.

This is what we launched a year and a half ago, in late 2014. This is the new reCaptcha experience. We call it "No captcha reCaptcha".

©Robert Wright/LDV Vision Summit

Here's how it works. You are presented with a checkbox labeled "I'm not a robot". What you do as a user is click on the checkbox to prove to us, reCaptcha, that you're indeed a human. If we can verify that you're a human, you come back with this green check. But the story is not that simple. In the back end, we have an advanced risk analysis system which, based on your click and your interactions with us, pre-classifies you on a spectrum between human and bot. If we think you're a human, a green check is returned automatically. If you're a bot, we tell you you're a bot and we reject you right on the spot.

For every other case where we're not so sure or we think that you're kind of suspicious, we give you different captcha challenges. Here I'm just explaining two examples.

The one on the left is a 3 by 3 grid of natural images, where you as a user select all the common objects among them. The one on the right is harder: you're given one picture and asked to localize where exactly an object is. As of today in 2016, this is still considered a difficult task for an advanced AI. I know earlier in today's session people were saying, "Oh, image recognition is a solved problem." Well, unfortunately, it's not solved for us, not until there is some off-the-shelf solution that can recognize any object in the world.
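The flow Ying walks through above can be captured in a few lines. This is a toy sketch only; the `risk_score` input and the thresholds are invented for illustration and are not Google's actual system.

```python
def handle_checkbox_click(risk_score: float) -> str:
    """Toy reCaptcha decision flow.

    risk_score: 0.0 = confidently human, 1.0 = confidently bot,
    produced by an upstream risk analysis we don't model here.
    """
    if risk_score < 0.2:
        return "green check"            # pre-classified human: no challenge
    if risk_score > 0.9:
        return "rejected"               # confident bot: rejected on the spot
    if risk_score < 0.6:
        return "3x3 image grid"         # mildly suspicious: easier challenge
    return "object localization"        # very suspicious: harder challenge
```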

We launched a year and a half ago. How did this new captcha experience do? I'm going to share some of the insights.

In the past one and a half years, we grew our footprint across the internet. Now we have over a million 7-day active clients. The captcha widget that we're showing, the "I'm not a robot" checkbox, speaks 56 languages and covers 240 countries and regions. Every day we receive hundreds of millions of captcha solutions. Among all the correct solutions, roughly one third come from the "No Captcha" experience, defined as a direct pass without solving a visual test.

Our mission is to keep the internet free of spam and abuse. To do that, we cannot drive humans away, so improving the human usability of reCaptcha has always been our top priority. In View 1, because of the pre-classification I was talking about earlier, we can serve much easier tasks to users we pre-classify as humans. That means easier text distortions, and we get an 89% pass rate. Pass rate here is defined as the total number of passing solutions over the total number of solutions. In View 2, that gets much better: the pass rate increases to 96%, and the remaining 4% of humans can always try again.

Solving captchas has also gotten much faster for human users. In View 1, because of the text distortion, you had to type through a keyboard, which is particularly cumbersome for mobile users. In View 2, that becomes two mouse clicks or screen touches. By doing that, we cut the solving time of a captcha almost in half. That's a few seconds saved for internet users on every captcha solved. Cumulatively, that is 50,000 hours that we save the internet every day. 50,000 hours is almost 6 person-years. This is a lot of time that you could spend watching cat and dog videos on YouTube rather than solving captchas.
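As a back-of-the-envelope check on that claim (the 50,000-hours figure is from the talk; the conversion is ours):

```python
hours_saved_per_day = 50_000
hours_per_year = 24 * 365                    # 8,760 hours in a calendar year
print(hours_saved_per_day / hours_per_year)  # ~5.7, i.e. almost 6 person-years per day
```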

Captchas are getting easier for human users. Here we're showing some stats from the bot analysis. For pre-classified bots, we give much harder captchas, and we see a significant attrition rate. The blue bar is how many times they click on "I'm not a robot" and the red bar is how many times they actually attempt to solve a captcha. As you can see, only 5% of the clicks lead to a solution attempt. The remaining 95% of the bots basically abandon the experience and walk away. We checked the same thing for human users. Is it simply that the captcha is hard and people walk away? Apparently not: human users actually attempt to solve the captcha more than 90% of the time.

This is the overall pass rate we observed globally from reCaptcha View 1. It is color coded, with red being a low pass rate, meaning most of the solutions failed, and green being a high pass rate, meaning most of them succeeded. As we move into the View 2 experience, the No Captcha experience, the map turns into a land of green. This is a very good thing for all internet users, because whenever you encounter a reCaptcha View 2 on the internet, you're most likely to solve it correctly. Unless you're a bot, in which case you're going to walk away.

To recap what I said just now: reCaptcha is getting easier and faster for human users and harder for bots. But this is not the end of our story. The other part I want to share with you is how reCaptcha is helping to make things better for humanity.

©Robert Wright/LDV Vision Summit

When we started reCaptcha nine years ago, it was an anti-abuse tool, but most importantly it also helped digitize books. Remember, in View 1 we showed two words. One was a text distortion used to verify that you're human. The other word actually came from a book scan, so it was a book word. If you answered the verification word correctly, we also assumed that you transcribed the book word correctly. In doing so, we have transcribed millions of books.
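A minimal sketch of that dual-word scheme: the known verification word gates trust in the unknown book word, and agreement across independent users produces the transcription. The function name and the vote threshold here are hypothetical.

```python
from collections import Counter

def transcribe_book_word(responses, verification_answer, min_votes=3):
    """responses: list of (verification_word, book_word) pairs typed by users."""
    # Only trust book-word answers from users who passed the verification word.
    trusted = [book.lower() for verification, book in responses
               if verification.lower() == verification_answer.lower()]
    if not trusted:
        return None
    word, votes = Counter(trusted).most_common(1)[0]
    return word if votes >= min_votes else None   # accept only on broad agreement

# Example: three users pass verification and agree on the scanned word.
responses = [("ocean", "whale"), ("ocean", "whale"), ("ocean", "whale"), ("xxxx", "shark")]
print(transcribe_book_word(responses, "ocean"))   # -> "whale"
```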

After the book digitization, we tried reCaptcha on transcribing street numbers and street names. Here we gathered the largest image training set online, and we have donated a significant chunk of it to the open research community. This is helping us build a better maps experience and more accurate maps for all internet users.

You can pretty much guess what I'm trying to say here. As we moved into View 2, we showed natural images for labeling. We're gathering the internet's intelligence to help teach machines and make AI smarter.

We're also celebrating holidays with internet users. Here are two example pictures from New Year's captchas.

The other thing that I didn't show here, as I was telling some of you during the break, is that some funny things happen at reCaptcha. We started the biggest debate on the internet about what is a burger and what is a sandwich. People love to argue about those things. Or: is a cupcake a cake? Those kinds of discussions. In doing those, we won a lot of internet love for reCaptcha.

To conclude my whole talk: reCaptcha is making a continuous effort to fight spam on the internet. We're making the internet a better experience for all human users, and we're also pushing the boundaries of research and making AI smarter.

“Killing Google With My Bare Hands” and Other Lessons Learned in Scaling with Jack Levin

©Robert Wright/LDV Vision Summit

Jack Levin was an early employee at Google, later started ImageShack, and is most recently at Nventify. In his keynote at the 2016 Annual LDV Vision Summit he spoke about his perspective from the early days of Google up to today. Tickets are on sale now for the upcoming Annual LDV Vision Summit.

I was privileged to be at Google from 1999 to 2005, essentially in the beginning of my career. I was twenty three or twenty four years old when I started.

I'm here to talk about scale: what it means, how to actually scale large systems, and how not to burn yourself and your infrastructure out. After Google, I spent some time running a company called ImageShack, doing about two billion web hits a day, which was a pretty good scale story in itself.

©Robert Wright/LDV Vision Summit

That picture is actually a rack that I built myself. It happens to be in the Computer History Museum in Mountain View. You can see that little label there, "JJ 17". I put that label there, and it has some of my blood DNA because, as you can see, this thing is a mess and I would often scratch my hands on that network card right there. On my first day at Google I essentially went into the data center, had to wire all of this and bring it online, and a week later we needed to launch netscape.com, which was the first really big client for Google.

Three hundred servers, but then the next week it was plus another two thousand, which was crazy. I had no idea what I was supposed to be doing. Larry Page said, "Hey, here's a bunch of cables, plug them in, you're good to go." That was the story for the first couple of years. A few years forward, I stopped going to the data center because there was a team of twenty-five people managing all of this.

So that's the second or maybe third generation of uber-racks. Tons of hardware, a lot of kilowatts being consumed, a lot of heat being generated, but a lot of queries being served.

So let's talk a little bit about disasters. I claim responsibility for killing Google with my bare hands a couple of times. We didn't know what we were doing, I clearly didn't know what I was doing: I would push the wrong button and Google would go down. I would jump on my motorcycle, or scooter at the time, literally run through the data center, unplug all the power supplies, and plug them back in to wipe out my configs, and everything would come back. That happened a few times until I figured out, "Perhaps I should have a dial-up line to the data center so I can dial in and undo my work."

But that comes with experience. We had a team of really smart people when it came to development, but we had no clue in operations. I had no idea what I was doing, and the guys who were hired were mostly IT on the corporate side. One of the biggest problems for startups that need to scale up quickly is that the founders are not IT people, not operations people. They hire guys to run their data centers, but nobody knows what they're doing, or where the pain points are, and so on and so forth.

For the longest time at Google, we didn't know what might kill Google. We had a bunch of monitoring, but we didn't know how to interpret it. Back in the day, about the year 2000, the way you would kill Google is that you would send it a query like "theological silhouette". The words have no meaning when combined, but Google would search all the way to the bottom of the index. If you sent five queries like that per second from your laptop, you could actually kill the whole Google search engine.

That was an interesting thing, and we learned that monitoring of queries and spam detection is really important.
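That lesson translates directly into code: estimate how expensive a query is before serving it, and budget each client so a handful of pathological queries can't saturate the index. The sketch below is a naive illustration under assumed names, not how Google does it; production systems use far more signals.

```python
import time
from collections import defaultdict

COST_BUDGET_PER_MINUTE = 100.0        # assumed per-client budget

class QueryThrottle:
    """Naive per-client cost budget. Rare-term queries that force a scan to
    the bottom of the index should carry a much higher estimated cost than
    cheap head queries."""

    def __init__(self):
        self._spend = defaultdict(list)      # client -> [(timestamp, cost), ...]

    def allow(self, client_id: str, estimated_cost: float) -> bool:
        now = time.time()
        # Keep only the last 60 seconds of spending for this client.
        recent = [(t, c) for t, c in self._spend[client_id] if now - t < 60.0]
        self._spend[client_id] = recent
        if sum(c for _, c in recent) + estimated_cost > COST_BUDGET_PER_MINUTE:
            return False                     # over budget: reject or queue the query
        recent.append((now, estimated_cost))
        return True
```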

©Robert Wright/LDV Vision Summit

©Robert Wright/LDV Vision Summit

That's actually one of my favorite slides. It's not specifically about Twitter, but you know that back in the day Twitter would go down all the time. What's going on, why is it always down? The interesting thing about Twitter is that it's not the language: Ruby isn't bad, Python isn't bad. What happened is that Twitter kind of, should I say, "flew away" from the company that tried to build it. They just got so popular so quickly that it was very difficult to scale. Sometimes it takes luck, persistence, and people working more than nine to five, twenty-four hours for several days, just to get things up and running in the right kind of way.

Eighty percent of the time, you just don't get it right. This is mostly about the operations teams. At a lot of these startups, the ops teams don't claim responsibility and, more often than not, it's because they're a disenfranchised group. Most of the people who call the shots are the founders, and the operations folks are just trying to run things, but aren't really at the forefront of the company's business.


Sometimes it takes luck, persistence, and people working more than nine to five, twenty-four hours for several days, just to get things up and running in the right kind of way.


Postmortems are very important. When things break, you need to talk about them and have your peers discuss them, but more often than not you also need to think about the future. What can possibly go wrong? That's a premortem. A premortem can help you envision the possibility of different kinds of disasters that you generally don't think about. If you don't think about them, then likely things are not going to work out for you.

In the early years of Google, I had no backout plan. It's not that I like to live dangerously, I just really didn't know what I was doing. A backout plan is very important. Right now at Nventify, the second company that I co-founded, it's very important for me to ask my engineers, "So you're going to make all those changes, do we know how to roll back?" Usually the answer I get is, "Well, I know what I'm doing." More often than not, that's not the case, and you need a backout plan.
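One simple way to make "do we know how to roll back?" answerable with a yes is to never deploy in place: keep the previous release on disk and make the switch a single atomic pointer swap. A minimal sketch of that pattern, with hypothetical paths:

```python
import os

RELEASES = "/srv/app/releases"    # hypothetical layout: one directory per release
CURRENT = "/srv/app/current"      # the symlink the server actually runs from

def deploy(new_release: str) -> str:
    """Point 'current' at the new release; return the old target for rollback."""
    previous = os.path.realpath(CURRENT)
    tmp = CURRENT + ".tmp"
    os.symlink(os.path.join(RELEASES, new_release), tmp)
    os.replace(tmp, CURRENT)      # atomic rename on POSIX: readers never see a gap
    return previous               # keeping this around *is* the backout plan

def rollback(previous: str) -> None:
    tmp = CURRENT + ".tmp"
    os.symlink(previous, tmp)
    os.replace(tmp, CURRENT)
```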

How to scale. That quad-copter seen at sea is a pretty great picture of scale that actually does work. So when do you scale, and how? Interestingly, most of the building blocks you need are already available on GitHub. Just go to GitHub, get your building blocks downloaded and tried by your engineers; there's no point rebuilding the whole thing, especially if it's not your core business. The company should really be focusing on its core business, and not necessarily on inventing new building blocks.

Surprisingly, you're a better engineer if you know how to use Google. You can find that a lot of things have already been solved. So using Google is a skill that every engineer should have.

That's a very important slide here. NIH stands for Not Invented Here. A lot of bigger companies, when they scale up, have people who say, "Hey, we never want to buy anything. We don't want to use anything that's open source. We want to create everything locally so we know exactly what is going on with our libraries." That's often a fallacy: it is actually expensive and slows down your progress and your ability to deliver product to end users.

©Robert Wright/LDV Vision Summit

I want to spend a few minutes talking about the future. Clearly, it's getting cheaper and cheaper to store your files. All of the great companies that you see on this screen will likely be competing against each other in the future; within five years the cost of storage will likely go down to zero and you will end up paying for something else.

The way I see it, especially with Google's efforts to deliver Fiber and satellite connections everywhere, and Facebook's as well, it's very likely that everybody is going to have free internet and essentially be plugged in. That's an interesting concept. We talk about storage nodes, and we talked about them at Google as well. What we're likely to see in five to fifteen years are storage nodes that are self-aware, driven some by conditions and some by AI. This way, if you get on a plane and have your data with you, it won't be on your phone; it might follow you from the terminal onto the plane, load up on the plane, and all of your movies and everything you have are right there.

I call it AI Powered Peer-to-Peer Storage. I know, it's kind of cool. There's more and more interesting technology being developed when it comes to consuming data. Specifically, I've seen some interesting car windshield glasses, and there's talk at Google about contact lenses that can create a VR feeling right in your eyes without wearing anything but contact lenses. This is Google Glass; currently we use text to query for things and find things. It's very likely that, if it isn't Google Glass, it'll be something similar, where visual information will be used for searches.

This is maybe twenty to twenty-five years from now, when advanced technology will give people the ability to record all of your experience from your visual cortex, your feeling of whatever you're touching, and eventually share this data between humans. It's not telepathy, but more like close-range mind-to-mind communication, which will be possible with technology.

Augmented reality and VR will likely merge, and we're likely to see an absence of keyboards and just using our minds and hands to manipulate and interact with data.

Discount tickets available for the upcoming LDV Vision Summit until May 15.

VR and Mixed Reality Platforms Are a Paradigm Shift in Storytelling

Heather Raikes, Creative Director of 8ninths ©Robert Wright/LDV Vision Summit

Join us at the next annual LDV Vision Summit in NYC. Early bird tickets are on sale until April 30.  

Heather Raikes is the Creative Director of 8ninths, and she spoke at our 2016 LDV Vision Summit about design patterns for evolving storytelling through virtual and mixed reality technologies.

Storytelling is in our DNA; it's part of what makes us human. How we tell our stories shapes our culture, deeply affects how we understand ourselves and each other, and shapes how we engage with the world around us. I'd like to start with a macro view of some archetypal patterns that underscore the fundamentals of storytelling and contextualize its evolution through emerging technologies.

The core construct of traditional storytelling is the linear narrative. The ancient art of the storyteller could be used as a starting point. Sitting around a campfire, an audience gathers usually in a circle and listens to the stories and songs of the storyteller. The temporal format is linear and continuous, and the storyteller is a clear and singular focal point for the experience. Theater offers an audience a more immersive experience of a story. Stagecraft evokes the narrative world. The audience identifies with actors portraying the story characters. The temporal experience is still linear and continuous, but the focal point is expanded from a single storyteller to the world of the stage.

The focal point is further expanded in film. The story is told from a montage of different perspectives. Temporal engagement is still linear, but the focal point shifts continuously and dramatically within the world of the screen. In television, the story becomes discontinuous and episodic. The focal point mimics film in the form of the montage within the world of the screen, but temporally the audience engages and disengages at will.

A more significant shift comes in the transition from analog to digital storytelling. Native digital storytelling is participatory and interactive, disrupting many of the tenets of classical storytelling. In gaming, the audience essentially becomes the protagonist, and their actions unfold the action of the story, which is experienced from a first person perspective.

When you follow someone on social media, you are a live witness to their story, which has no clear ending and is told from an infinite number of discrete focal points derived from their journey through life. You are presumably contributing your story to this forum as well. There becomes a merging between the story you are witnessing, the story you are telling, and the story you are living.


In virtual reality, the story you are experiencing or witnessing becomes perceptually indistinguishable from your reality. You are completely immersed in a virtual world, and on some level your neurosensory processing system believes that it is reality.

-Heather Raikes, Creative Director of 8ninths


The next paradigm shift in storytelling is currently arriving with the onset of virtual and mixed reality platforms. In virtual reality, the story you are experiencing or witnessing becomes perceptually indistinguishable from your reality. You are completely immersed in a virtual world, and on some level your neurosensory processing system believes that it is reality. Comparably but differently, in mixed reality the story integrates seamlessly with your physical environment and your immediate perceptual framework, again, merging your reality with the world of the story.

©Robert Wright/LDV Vision Summit

I'm currently creative director at 8ninths, a virtual and mixed reality development studio based in Seattle. We're working in this space and applying VR and MR technologies not just to entertainment stories but also to business. We've found that design patterns are an important compass for our team, for our partners and clients, and for the community at large in figuring out what to make creatively of this brave new space. I'm going to give you a tip-of-the-iceberg roadmap of our starting points in thinking about developing for these emerging media.

Virtual reality is currently in its launch phase and is presenting a spectrum of platforms ranging from high end desktop room scale VR with physical tracking systems, to mobile VR platforms, to affordable contraptions that you can snap any mobile phone into. Some of the design patterns that we're currently exploring and developing for VR include visual grammars for 360 storytelling, temporal structures and story rhythms that are native to VR, world to world transition techniques, spherical user interface design, spatialized audio-video composition techniques, virtual embodiment and iconographic representation of physical presence in virtual spaces, and syntax for virtual collaboration.

Mixed reality is currently pre-launch. Developer editions of Microsoft HoloLens and Meta are just starting to ship, and Magic Leap is still pre-developer release. 8ninths was one of seven companies selected worldwide to be part of an early access developer program for HoloLens, and we've been working with HoloLens since last fall. We did a major project with Citibank in their Innovation Lab exploring expanding information-based workflow into mixed reality. As part of that process, we created a document called the HoloLens Design Patterns that breaks down and looks at core building blocks of early holographic computing experiences.

In the interest of time, this is a five-minute talk, this story is really just beginning to unfold. That is a sincere statement. It's a really exciting time in history. We will continue to publish virtual and mixed reality design patterns at this URL. We invite you to be part of the conversation. Thank you.

©Robert Wright/LDV Vision Summit

Computer Vision Will Dramatically Affect The Travel Industry: People don't have to "go there" to "be there"

2016 ©Robert Wright/LDV Vision Summit

Brian Cohen is the Chairman of NY Angels, the founding partner of NY Venture Partners and the author of "What Every Angel Investor Wants You to Know: An Insider Reveals How to Get Smart Funding for Your Billion Dollar Idea." At the 3rd Annual LDV Vision Summit in 2016, he was a judge for the Startup Competition. Evan had a chance to catch up with him in April about the extensive investment opportunities in leveraging visual technologies.

Discount tickets still available for the next LDV Vision Summit May 24 & 25 in NYC.

You see many startup pitches as Chairman of NY Angels and via your fund NY Venture Partners. What are the three most important reasons you choose to invest in a team?

  1. I love it when I see a team fully in control of what they're doing. Raising money is an important part of their business. If they understand the key principles of the process and fully engage with their investors, it's a good sign that they can run their business well.
  2. Have they taken the time to fully engage with their early customers - listening, learning and continuing to better understand them? Do they know what their customers are thinking, not just what they're doing?
  3. Are they realistic about what they can achieve? Are their initial goals ones that can be highly leveraged, so that they pave a stronger and faster path to a more successful future?

You have invested in several visual technology businesses including being one of the early investors in Pinterest. Why did you invest in them? 
Frankly, I'm a very visual person. I tend to see words as images and create a more intimate sense of relationship to the world by doing this. In a world of hyper speed, it's pretty clear that people must communicate through visual ideas and not rely on long-winded tangled phrases.

LDV Community Dinner, July 2015  ©Robert Wright

What business sector do you believe will be most disrupted by computer vision?
It's extremely broad. Anywhere we need to see and experience things more clearly and faster will be affected.  The travel industry will be dramatically affected because people don't have to "go there" to "be there". We will have immersion rooms that create a new reality of place. 

What was your biggest mistake as an investor and what did you learn from it?
Very early on I began to look for reasons to do a deal, not for reasons not to do a deal. It's easy to see the challenges and be fearful. It's a lot harder to recognize and believe in the clarity of the big vision and to see and understand why something will work.
 

2016 ©Robert Wright/LDV Vision Summit

You were a judge for our LDV Vision Summit 2016 startup competition. What did you learn during our last Summit? Who do you think should attend our next LDV Vision Summit May 24 & 25? 
I learned that we're really just at the very early stages of applying visual communications to solving problems. Anyone trying to better solve their customers' problems needs a better appreciation for how visual communications can be leveraged. If you are building a business with any visual data, computer vision or artificial intelligence, then you should definitely attend the annual LDV Vision Summit in May.


You wrote a great book called “What Every Angel Investor Wants You to Know: An Insider Reveals How to Get Smart Funding for Your Billion Dollar Idea.” What was your goal in writing this book? Do you have a success story after writing this book that you can share?
Writing the book was an opportunity to more fully describe my personal path to successful angel investing. I wanted people to know that there are best practices that should be followed. Unfortunately, most angel investors and entrepreneurs really don't understand one another. My goal was for them to be more aligned and supportive of one another. That leads to much better outcomes.

I have had many more founders contact me as a result of writing the book. They appreciate the warm words but strong convictions about running a smart fundraising process.
 

Kelly Hoey and Brian Cohen at our LDV Community Dinner, July 2015  ©Robert Wright
 

In 2003 They Called Evan Crazy - He Said Phones Would Replace Point & Shoot Cameras

Evan has been working in the visual tech industry for over 20 years. In 2003 he wrote that camera phones would take the place of point-and-shoot cameras, and everyone thought he was crazy.

Learn how visual technologies will change business and society in the future at our 2017 Annual LDV Vision Summit. Early bird tickets on sale until April 30.

Why Will Wireless Camera Phones Revolutionize the Photography Industry?

Original story posted in The Digital Journalist in May 2003 by Evan Nisselson. 

The digital screen in front of me says that it’s 3:32 AM London time; I am 38,000 ft above Greenland on my taxi between NYC and Milan. My laptop is plugged into the airline power system, I am about to order another whiskey, and I wish they had free Wi-Fi Internet access on the airplane like yesterday outside in Bryant Park, NYC.

For many years I carried a Nikon F around every day so I was ready to capture the next ‘Love the Living of Life’ moment to be able to share with others. However, in 1999 I stopped carrying my camera because it was additional weight in my bag. It wasn’t being used often enough to justify carrying it along with the laptop, PDA, cell phone, and associated cords.

A packed bag is the norm for working photographers, but my reality was not working in the field but rather trying to develop new digital imaging solutions. It wasn't a lack of interest in making pictures; my mind was focused elsewhere.

However, every couple of weeks I would get frustrated that I had missed a moment that I would normally have captured with my camera and shared with others.

Now, the problem is solved, I have a cell phone camera.

Photography means many things to many people at many times. It’s a means of communicating at its core. People use photos to visually communicate with others about a vacation, a bike ride, a news event, a celebrity, or about your “totaled” car to the insurance company. The process of visually communicating is in for a drastic shift due to the arrival of cell phone cameras.

Professional photographers and consumers around the world have finally started to realize the benefits of making pictures digitally but it’s not going to compare to how wireless photography will revolutionize how people make, share, sell and communicate with pictures.

Nowadays, unlike with cameras, people around the world don't leave their homes without their designer cell phones. Professional photographers usually carry their cameras around every day, but there are times even they leave the house without a camera; all of them take their cellular phones.

I had been waiting to get my first cell phone camera for years. Professional photographers say they are overwhelmed with the tools of their trade which need to be toted around: a laptop, PDA, 2-4 camera bodies, lenses, flashes, film, batteries, smart cards, power cords, adaptors, satellite phones, backup hard drives, etc.

Carrying a combined cellular phone and camera is a totally different mindset than meandering around making pictures with a typical camera.

Even after buying my new cell camera, I continued to get frustrated that I was missing moments worth capturing because I didn't have my camera, and then at the last possible second I would realize that I could grab my cell camera and capture the moment. Now not only can I capture the moment, but I can also instantly share that moment with someone around the globe by sending it as an MMS within seconds. Once again I am ready to make pictures wherever I go, as I used to do with my analog camera.

On the way to the office the other day, I was making pictures with my cell camera. I came across a war protest that was about to happen… I could have had the scoop but was late for a meeting so I couldn’t wait around. It will definitely be easier for the masses to capture news events when photographers are not present but the quality and legitimacy of their photos will always be an issue.

There are many people that I frequently brainstorm with about the present and future of digital imaging for consumers, professionals and businesses. Below are some of my more recent conversations that add perspective to how cell phone camera devices will revolutionize the photography industry.

Bob Goldstein and I were talking the other day on the phone between Milan and Los Angeles about his research on how digital imaging is revolutionizing visual communications:

BOB
One of the things that we’re saying is once you have something like [a cell camera] that’s with you all the time – you’ll end up reaching for it, instead of a pencil or a pen to write something down, instead of reaching for a keyboard to type something out.

EVAN
Oh, absolutely. I’ve already done it twice. I’ve told you before. I definitely think the cell phone camera is going to revolutionize digital imaging for the consumer.

BOB
I think for everybody.

EVAN
Well, yeah, you’re right, also for professional photographers and business folks. But when I think of the photographers out there, I think of the professional photojournalist, and then I think of the consumers making pictures. But you’re right. As far as visual communications, the business opportunities are tremendous. So I got back to the office today and I was in a meeting, and there was a blackboard or a whiteboard all scribbled. And I said, well, we can’t erase this because I don’t know if the person has copied it down. So I made a photo of it.

BOB
There you go....

EVAN
… Saturday, when I was walking around the center near the Duomo in Milan, there were, I thought, a lot of people, seven to ten people that were doing the same thing with their cell phones that I was doing. And they all had three or four people looking around it after making pictures for instant gratification. So I downloaded all my camera photos, like a hundred of them, or eighty of them, and put them on my computer, but I kept the three that I really liked on the cell and shared them in the office on the phone. Everyone loved sharing those photos digitally. So the quality is horrible, but the concept that it will be at one megapixel and two megapixel and maybe even three by the end of this year proves the fact that the quality is just a matter of time. And the user experience is the big question – well, how does it do it? I was trying to figure out if people noticed me making pictures or was I a spy. Italians, I think, already have an idea that these cameras exist and they kind of don’t see it as awkward.

BOB
But they’re seeing it as a regular camera at that point.

EVAN
Absolutely. Because the sole reason that anybody makes a picture, whether a professional, consumer or business is to communicate. That’s the sole reason; to communicate, share that communication, save a memory, and document history, which is just another communication for a later date.

BOB
And I’ll tell you something. When you said the quality was horrible, I’ve got to tell you – and part of it is I’m used to the palm-cam and all the rest of it – but in terms of you sending me a little what we would call a modern snapshot, several of the pictures, especially the first one that I looked at of the guy and the dog, I mean, the quality was completely acceptable considering, first of all, that it’s first generation, but also considering what it is. You’re sending me your impression of a moment on the street. And that was completely transmitted to me. I didn’t look at it and wonder what it was; I looked at it and went, wow.

EVAN
Thanks. Yeah, yeah. No, I’ve showed it to many people, and they’re like, all the other ones are cute, but that’s a great photo…once I saw it, I knew I nailed it.

BOB
Right. So even with all the limitations, that still, that first impression, because you wielded it so well- I’ll tell you what I think also is happening, especially – I mean, this has been going on for a long time, but I think it’s really going on big time now – is in terms of quality expectations. We’re looking at hours of footage on TV now that looks like somebody puked on the lens and is underwater. (Exactly.) And so people are grateful for any kind of image. And again, in looking at yours [photo], I thought the quality was – considering that it’s still a, what, a 640-by-480 image – was astoundingly good. And as you say, rightfully so, within the year we’re going to have one megapixel cameras and up.


I often talk about technology before it becomes commonplace, and that is either a curse, a blessing, an insight or maybe all of the above. This time it’s no joke – wireless photography and, more importantly, cell phone cameras are going to revolutionize how consumers and professional photographers make, share, distribute, sell and communicate with photos.

It has been fantastic to experience the transition from analog to digital photography over the last 10 years. In 1993 I was transmitting digital photos to SABA’s agents around the world over an ISDN line; in ’95 it was 24 Hours in Cyberspace; in ’97 I created the first Internet broadband photography portal. Today it’s cell cameras, and tomorrow it will probably be photo blogs, personalized digital distribution and new marketing solutions for photographers - but that’ll have to wait for future articles.

The other day I was discussing with David Friend via email how cell phones with digital cameras will revolutionize photography and he wrote the following, which I totally agree with:

“What ARE the myriad applications? Sales people in the field connecting to the home office; photography scouts scouting shoot locations; many things Polaroid’s are used for now; grandkids/kids connecting with faraway parents/grandparents; disasters & breaking news events shot by local citizenry, EMS workers, etc....I think it's just a few steps away from Dick Tracy wrist-video phones.”

There are many examples of how businesses are using wireless photography to do their jobs better and faster and to save money. Insurance companies are sending people into the field to make and instantly transmit accident photos to corporate headquarters. The other day someone told me a story of a copier mechanic talking on a walkie-talkie to his office because he couldn’t figure out which plug he should adjust – they kept going back and forth, but the descriptions were not accurate enough. All of a sudden someone offered their cell camera to make a picture of the machine, asked for the office email address and emailed the photos from the cell, and within moments the office said ‘WOW – adjust that dirty red cord next to the blue one to fix the copier.’

The image quality of these cell cameras is 640 x 480 at best, but we should have three-megapixel cell cameras on the market within 18 months. Anyone who is complaining about the quality of today’s cameras is not focusing on the critical technological and cultural advances that are knocking at our door. Quality is a minor issue today and will be solved in time, just as professional digital cameras are now good enough for publishing high-quality books.

Additionally, most consumers tell endless stories and share tons of laughter over photos that are barely legible. These cell cameras will not replace professional cameras, but they will be another tool, just like a web site, a wide-angle lens or analog film.

Can you tell which of the following photo strips were made with my cell camera?

I was trading emails with a photography friend, and he was a bit outraged by my email about the BBC asking its online audience to submit photos from cell phones, digital and analog cameras of Iraq war protests around the world. I thought this was a fantastic way for the BBC to develop a more global, interactive online community, but he took a different perspective:

Now, now--it's quite rosy from your perspective but the trend amongst the media is transparent; cut costs and maximize profits, fatten up for our next merger, nothing else matters. I'm rather surprised that you don't recognize the BBC link as an exploration of potential free content.

In the future, I believe I will need to convince the editors and AD’s that they need something more than a chimp with a cellular camera, or I'm doomed to compete against the pizza-faced kids supplementing their part-time jobs at McDonalds! That kind of photo--or snapshot, whatever, from a cellular camera--will be so commonplace that there will be no budget for it.

I agree that the BBC probably had multiple agendas, from saving money to interactive programming, but I strongly believe that photographers shouldn’t feel threatened by consumer photographers, because the creative eye and skills of most professional photographers are far superior. Publishers will always need them to succeed.
Professional photographers already compete with the public when it comes to photographing news events, because publishers often publish the public’s news photos. Cell cameras might make it easier for the public to make newsworthy pictures.

I finally arrived at Heathrow at 9AM only to have a five-hour layover before my flight to Milan, but thank goodness for the business center so that I could re-connect my veins to an Internet IV.

I just had an interesting chat via instant messenger with Damon Kiesow, who is a Sr. Photo Editor working the early-morning news shift today at America Online in Virginia.

Cell cameras will revolutionize the photography industry because they can instantly share photos from your camera with people around the world! The other day, a friend said that she has hundreds of photos that no one has seen because she is too busy with two young kids to even think about printing, uploading and emailing them. The reality is that digital photography hasn’t made it easier to visually communicate, yet... The photography industry is in the first inning of the first game of a long competitive season.

Many industry analysts are forecasting that camera phones will outsell "standalone" digital cameras in the next couple of years!

Cell cameras will only successfully revolutionize the photography industry if it is simple to make and share photos with these new devices. Additionally, today’s pricing models from the wireless companies have to become cheaper to get everyone addicted like I am. It is so much fun to make pictures with my cell phone and then instantly share them with others around the world.

If the above is not enough to convince you that we are on the precipice of another major transition in the photography industry then the following two stories might help.

After arriving in Milan, I picked up my cell phone camera, which I wasn’t able to use in the States because of Italian pre-paid service issues. Within one hour of using my cell camera again in Italy, I had sent 5 new photos as MMS messages to friends around the world showing that I was back in Italy. Cell cameras are addictive!

I was at the Blue Note Jazz Hall in Milan and I wanted to share the moment with a friend in California who is a jazz musician. I made a photo with my cell camera, recorded a couple of seconds of audio from the show and then sent the photo, audio and a short text as an MMS minutes later.

Everyone at my table and the neighboring table was amazed and jealous, and asked in multiple languages which cell phone camera they should buy! In conclusion: carpe diem, and figure out how you should leverage wireless camera devices to either visually communicate more easily and/or increase revenue for your photography business.

[All photos made with the Sony P800 Cell Camera except one photo strip]

An Image is Really Hundreds of Data Points That Tell Us Who We Are

Anastasia Leng, CEO of Picasso Labs © Robert Wright/LDV Vision Summit

Join us at the next annual LDV Vision Summit in NYC. Early bird tickets are on sale until April 30.

This is a transcript of the keynote by Anastasia Leng, CEO of Picasso Labs, from our 2016 LDV Vision Summit.

Thank you. Hi, everyone. My name is Anastasia Leng and I am a former Googler using technology to measure creativity. Now, before you decide if that statement alone makes you love me or hate me, let me tell you a little about how we're putting science into something that has traditionally been an art.

As many of you know, human brains process visual information sixty thousand times faster than we process text. But the result of that speed is that we walk away with a very subjective understanding of an image. I like this image, or I don't, this is a good image or a bad image. We've all probably sat in a meeting discussing a photo with someone who is more senior than us who says, I don't know why, but I just don't like this image, and there's nothing we can do about it.

Technology is very different; technology looks at an image objectively. It gives you the most comprehensive set of metadata contained within an image. With that data, thanks to image recognition technology, we’ve done things like build visual search and content recommendation systems.

What if we could use this data to understand how your users’ reactions change based on the content of your image? What if we could use this data to better understand how their perception of and interaction with your brand changes, and the why behind image performance?

What if we could use this data to measure and optimize your visual voice at scale? Now, this in itself is not really a new concept. Psychologists have been toying around with this for years. They’ve been looking at how visual stimuli change users’ behavior, perceptions and, ultimately, their reactions to a brand or a different environment.

For example, in the late 1990s, the government of Scotland decided that it wanted to reduce the crime rate, especially at night. At the same time, the government of Japan wanted to reduce suicides at train stations where people were jumping in front of trains. Both installed blue lighting, which is meant to have a calming effect. Scotland saw a 9% reduction in crime.

Pharmaceutical companies – big pharma – have been accused of using color psychology to influence their trials, thereby heightening the placebo effect. Consumers, or their trial participants, rate red pills as having more of a stimulant effect and blue pills as having more of a depressant effect. Now this varies by gender and this varies by country – it is very culture-specific – but the reaction is real; we respond to visual stimuli very differently. As a fun fact, one of the common complaints about Viagra in the U.S. is that the pill is blue, which doesn’t match the effect consumers expect it to have.
 

© Robert Wright/LDV Vision Summit

Brands have known this, and they’ve used it anecdotally, one-off. If you open your phone now and look at any of the food apps on it, chances are they will be red or orange; that is not an accident. Porn is the same way; it’s not an accident – it is because they want you to be impulsive. If you look on the left side you’ll see a bunch of brands that are blue. Those brands want you to give them your money or your data, and what they’re saying is: we are safe, we are trustworthy. If you’re going for global domination, the rainbow effect here at the bottom seems to be the lucky ticket.

But it really is about so much more than just color, and to illustrate this I want to tell you guys a personal anecdote. I’m a freak about A/B testing, which has bled a bit into my personal life, and when I was fundraising last August and pitching my first clients, I started experimenting with how the way I looked impacted my conversion rate at a meeting.

I started to look at whether investors’ or clients’ reactions changed based on whether I wore my glasses or not. This is nowhere near statistically significant, right? But the reality is, I now wear glasses to every investor meeting because I saw that my conversion rate was higher. What it does tell you – and what we all fundamentally believe – is that the way we look and the things we wear impact the way people react to us.

If I were trying to optimize for conversion rate in dating, it’d be the other way around. The question is – and this is probably very context-dependent – if this is the reality, why wouldn’t the same be true for brands?


Brands spend billions of dollars creating millions of images. Those images contain trillions of data points about consumers’ actual revealed preferences for the visual content within those images. But brands have no idea how to harness this data. That is what Picasso Labs does: we give you very specific performance insights to help you understand the “why” behind image performance, and to help you better understand who your audience is and how different parts of your audience respond to different visual content.

Our technology is never going to tell you something like “always use red on Instagram” or “blondes are always better in your display campaigns.” We believe audiences react differently to different visuals depending on the brand behind them, so we use very personalized image recognition and machine learning to understand what it is about a specific audience’s reaction to your brand that causes an impact on behavior.

As a result of the way we work, a lot of the insights we gather we can’t really talk about, because they are seen as a competitive advantage and are very proprietary to the brand. But we have been working with a number of luxury fashion companies who’ve let us expose a little bit of the data. A few months ago, around fashion week, we were working with companies who wanted to understand what type of image style worked best. This is luxury fashion companies on Instagram, and what they were measuring was increase in engagement; engagement on Instagram is likes, comments, etc. I’m going to let you guys guess, and I’ve started you off with an easy one: which image style, for luxury fashion brands – I’m talking Chanel, Louis Vuitton, Prada, etc. – gets the highest engagement on Instagram? Raise your hand if you think it’s runway. Okay, a couple of hands. Raise your hand if you think it’s editorial. What about street style? Yes, okay, I told you guys this was an easy one – street style is absolutely right.

Now the fascinating thing here is that we all knew that – well, not all, but most of us knew the answer; maybe we were lucky, maybe it was intuition. Around fashion week, engagement rates dropped for most of the luxury fashion brands we monitored. Part of that is a saturation problem, but part of it is that, even outside fashion week cycles, any runway content just seems to really drag your performance down – and we’ve analyzed this by looking at millions of images across a bunch of fashion brands.
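[To make this kind of measurement concrete, here is a toy Python sketch of engagement rate by image style. The numbers, the style labels and the normalization by follower count are invented for illustration; this is not Picasso Labs data or code.]

```python
# Toy sketch: average engagement rate per image style.
# All figures below are invented for illustration only.
from collections import defaultdict

posts = [
    {"style": "runway",    "likes": 3200, "comments": 40,  "followers": 1_000_000},
    {"style": "editorial", "likes": 5100, "comments": 85,  "followers": 1_000_000},
    {"style": "street",    "likes": 9800, "comments": 210, "followers": 1_000_000},
]

# style -> [sum of per-post engagement rates, post count]
totals = defaultdict(lambda: [0.0, 0])
for p in posts:
    totals[p["style"]][0] += (p["likes"] + p["comments"]) / p["followers"]
    totals[p["style"]][1] += 1

# Print the average engagement rate per style, highest first.
for style, (rate_sum, n) in sorted(totals.items(),
                                   key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{style}: {rate_sum / n:.3%} average engagement rate")
```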

© Robert Wright/LDV Vision Summit

Now the second one: what type of model shot works best? Raise your hand if you think partial body, face visible, is going to be the winner here. Okay. What about partial body, no face? Okay. Full body? Okay, so I think full body’s got it, but no one seems quite sure; this one’s a bit harder, and the results were really surprising. No face wins. In fact, if there is any takeaway that we’ve seen across most luxury fashion brands, it is: cut off the face, right? Which is crazy when you think about the amount of money people spend hiring models strictly for the attractiveness of their faces, when actually consumers don’t want to see them.

So, that’s Picasso Labs. Our mission is to foster creativity through data. We really believe in giving you the kind of data that helps you make smarter creative decisions, so that the next time you’re in a meeting with someone who says, “I just don’t like it,” you can say, “Well, I don’t care, because I have data to show that our users do.”

Thanks very much.

Sturfee Launching Augmented Reality Gaming App with Investor They Met at LDV Vision Summit

Anil Cheriyadat, Founder and CEO of Sturfee © Robert Wright/LDV Vision Summit

Anil Cheriyadat, Founder and CEO of Sturfee, was a finalist in the ECVC at the 2015 annual LDV Vision Summit. Sturfee’s technology uses mobile cameras to recognize the real world around you, then augments it for travel, gaming, entertainment, etc. We caught up with Anil as Sturfee prepares to launch its augmented reality social application...

How have you advanced since the last LDV Vision Summit?
Since the Vision Summit we have raised US$745K as part of our initial seed round. Two Silicon Valley investment firms, along with other notable angels, took part in the funding round, and we are now in the process of raising the remaining part of the seed round (US$800K). We have also expanded our team to 7 and are looking for more engineers in the areas of deep learning, geometrical computer vision, and GPU programming.

What are the 2-3 key steps you have taken to achieve that advancement?
Turning cameras into novel interfaces through which we can transform live streets for travel, gaming, and enterprise applications has disruptive potential. The problem was quite interesting from an AR, AI, and robotics perspective, and the approach we took was unique. We studied the problem well and put our solution through different conditions. Being a team of engineers who had worked closely together before helped us advance quickly.

Our move to San Francisco in 2015 was also key (before that we were at Oak Ridge National Laboratory in TN); it allowed us to be close to the “mothership.”

What project(s)/work is your focus right now?
Two things are in progress right now: (i) we are laser-focused on bringing the technology to market as a social consumer product for a focused group. The technology can convert physical places into entertainment zones - shoot a 10-second video of the Empire State Building, place a basketball hoop on the building, and challenge all your friends to beat you! (ii) With our technology we are addressing the fundamental AR problem – estimating 3D measurements through the camera lens. Existing solutions were designed to operate effectively indoors, but outdoors the 3D measurements can be estimated from other vantage points – even from space. This is key. We are focused on advancing our street vision engine, which fuses image data captured from diverse perspectives converging on a location.

What is your proudest accomplishment over the last year?
Our biggest achievement is putting together the team. We knew that the solution we had developed for the AR problem was unique, and it required people from diverse backgrounds. Currently our team includes a PhD in satellite image analysis, a PhD in motion analysis, a game designer, a GPU engineer, a Golang stack developer, a game mechanics specialist and an iOS programmer. It’s the whole package that makes it work!

Anil Cheriyadat, Founder and CEO of Sturfee © Robert Wright/LDV Vision Summit

What was a key challenge you had to overcome to accomplish that? How did you overcome it?
Building a team when you don’t have money is not easy, but at Sturfee we were able to bring people together at the early stages. The key to this achievement was clearly defining the problem we are solving and illustrating the vast disruptive potential across travel, wearables, and robotics.

Dr. Harini Sridharan, who is now CTO and Co-Founder, joined Sturfee in the early days. We have been working on this for a long while now; the first IEEE workshop on “Computer Vision for Converging Perspectives,” which I co-chaired as part of the 2013 International Conference on Computer Vision, was the starting point. As the team gained more understanding of the solution, it became clear to everyone that we were onto something. That is the key motivation pushing us forward.

What are you looking to achieve in 2017?
With our first product we plan to give people the power to generate a new form of AR pictures and videos – imagine throwing a digital ball into a live street scene, and it bounces around, hits an incoming car, and flies off. Streets are now game scenes! Every user with a phone can convert live streets into game arenas. We will empower people to turn streets into magic zones. We are aware that transitioning from technology to product is not an easy step; we have been preparing for this since January 2015.

Did our LDV Vision Summit help you? If yes, how?
The LDV Vision Summit helped us connect with people who are really good in the computer vision business. The meetings and discussions we started at the summit eventually resulted in angel investment.

At traditional IEEE conferences you might find groups focused on the technical areas. At LDV you have a balance of technical and business experts in the audience to network with, brainstorm with and recruit from, plus many investors to speak with.

What was the most valuable aspect of competing in the ECVC for you?
Feedback from the judges was valuable. Again, it came from people who understood computer vision as a “business” solution.

What recommendation(s) would you make to teams submitting their projects to the ECVC?
Startups in computer vision should definitely apply to the ECVC. You will meet interesting folks and companies who have been at the summit before and are now advancing to later stages. That will motivate you.

What is your favorite Computer Vision blog/website to stay up-to-date on developments in the sector?
For interesting vision stuff, I read Kaptur, TechCrunch and Tombone’s Computer Vision Blog (written by Tomasz Malisiewicz).

Join us at our next annual LDV Vision Summit on May 24-25 in NYC.  Early bird tickets available until April 15.

 

Josh Elman Says Building a Company is Hypothesis Testing

Josh Elman, Partner at Greylock Partners © Robert Wright/LDV Vision Summit

Join us at our next annual LDV Vision Summit on May 24-25 in NYC.  Early bird tickets available until April 15. This fireside chat with Josh Elman of Greylock Partners and Evan Nisselson of LDV Capital took place at the 2016 LDV Vision Summit.

Evan: I'm honored to bring up our next guest, Josh Elman from Greylock Partners - come on up. Hey Josh.

Josh: Hey Evan, thank you.

Evan: Good afternoon. People are going to make pictures, make videos, and there are a couple of 360 cameras out there. We don't know what is going to happen.

Josh: Do you have drones flying around?

Evan: Yes. They're very silent drones. We talked to Bijan yesterday about Lily Robotics, which is the flying camera, and all this kind of stuff. In your own words, what are Greylock's focus, size and stage, and what are you most focused on?

Josh: Greylock as a firm has been around 50 years. We're one of the oldest venture capital firms, and we try to look for the same things: companies we can see ourselves helping to build, companies that are building enduring value, that can really lay a foundation to be 50-year-or-more companies and really build the platforms of the future. We get really excited about companies that understand how to build great network effects - whether they're consumer networks, where lots of people come together; marketplaces, where lots of transactions happen; or data networks, where the company amasses so much data that it creates a real advantage to create and sell more and more products. We see that often.

We focus on consumer platforms. In the past we've been involved in Facebook, LinkedIn, Airbnb, Pandora. On the enterprise side we've been involved in companies like Workday, Palo Alto Networks and AppDynamics. We're a billion-dollar fund; we focus mostly on Series A and Series B investments, writing checks anywhere from $7 million up to $25 million or so in a Series B. We take real board positions; we really roll up our sleeves and help.

All of my partners have worked at companies that have become very big, iconic companies in the past and gotten very large themselves, and have been very hands on, usually in product management or founder type roles, where they really are driving the actual strategy of the company. We really take a hands on, helpful position trying to help those companies become great and find their way through.

Evan: You worked at some bigger companies, always focused on the product and growth aspects of those companies - Twitter, LinkedIn, Facebook. Tell us a little bit about the differences between those three, as an example, because most people here don't know what happens inside. What's the one nugget of difference between those three that you experienced as an employee?

Josh: It's funny that you call them bigger companies because when I joined they were a lot smaller.

Evan: What size were they? What number were you there?

Josh: LinkedIn was about 15 people when I joined. Twitter was about 80 people when I joined. Facebook was about 500 when I joined.

Evan: I remember talking to my friend Constantine who was early at LinkedIn, we were having coffee and talking about our different startups, "Well, is yours going to work?," "I don't know. Is yours going to work?,” “I don't know." Fortunately LinkedIn was one of the ones that worked.

Josh: They figured out a lot. I'll start with the quick similarity between all three companies, LinkedIn, Facebook, and Twitter. We grew up in the same era - LinkedIn was founded in 2002 and launched in 2003, Facebook in 2004, Twitter in 2007. This was an era when Google was the big dominant company. Everybody looked at Google as this massive, brilliant technology company that could make great technology that does everything important.

At Facebook, LinkedIn, and Twitter, we definitely thought of ourselves as technology companies - we were building new products and everything else - but in some ways we thought of ourselves not as technology companies. We were more like human psychology companies, giving people new tools to connect. I almost joke, but we were just putting up forms that people filled out; then we took the information they submitted in the form and shared it with more people. There wasn't a lot of technology in the way that Google kept talking about technology back then.

That was this amazing contrast: they understand technology, but they don't understand people. We were going to build products that deeply understand people - that get the language right, that get the words right, that get the images, the faces, the right content - so that when you're experiencing the product, you're experiencing it in a much more emotional, human way, instead of thinking you're talking to a black box that's masterful at technology.


At Facebook, LinkedIn, and Twitter...we were more like human psychology companies, giving people new tools to connect.

-Josh Elman


Evan: While you were there, you threw things up - forms, content, images - but how did you know when a couple of them worked, so you could double down and triple down on those features? At a basic level, you put ten things up in the span of a month or two, and you said, "That one's it." I ask because everyone in the audience, or maybe half the folks in the audience, is trying to build companies.

Josh: We spent a ton of time looking at our data and really understanding it. At LinkedIn we spent a lot of time on virality; at Twitter we spent a lot of time on retention - really understanding what the overall conversion rates were and whether there was retention behavior. The thing I always love to remind people of, which I think is really important, is, as I like to say, "Data is the plural of anecdote." You can be looking at all this data and talking about conversion rates and everything else, but you have to get to the meat of the underlying stories. You have to actually talk to users. We would talk to users all the time and say, "Why did you sign up? What made you want to do this? Why didn't you want to sign up?"

At LinkedIn we had a lot of people who didn't want to sign up because they thought LinkedIn was a job site and they didn't want to put their resume online. We ended up building a whole jobs product so we could say, "Hey, we do have a job site too. It's a little bit of a business, but if you don't enter the jobs area, LinkedIn is still really valuable for you as a professional." The other thing we spent a lot of time on was language.

The whole way LinkedIn grew was that I would send an invite to Evan - "Hey, please come join my network on LinkedIn" - and we'd have a bunch of language around it. If we had language that said, "It will make both of our networks bigger," then when Evan signed up for LinkedIn, he was more likely to invite more people. We were able to see a secondary viral effect that was even better, just by tweaking the language we put on the invitation itself.
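[The compounding Josh describes is the viral coefficient at work. Here is a rough sketch with invented numbers - not LinkedIn data - of how a small lift in invites sent and invite conversion multiplies across generations of signups.]

```python
# Hypothetical figures for illustration only (not LinkedIn data).
# The viral coefficient k = invites sent per new user x invite
# conversion rate; wording that lifts either factor compounds
# across generations -- the "secondary viral effect" above.

def viral_coefficient(invites_per_user: float, conversion_rate: float) -> float:
    return invites_per_user * conversion_rate

k_a = viral_coefficient(invites_per_user=3.0, conversion_rate=0.20)  # 0.60
k_b = viral_coefficient(invites_per_user=3.6, conversion_rate=0.22)  # ~0.79

def generation_size(k: float, n: int, seed_users: int = 1000) -> float:
    """Expected number of new users in generation n from the seed cohort."""
    return seed_users * k ** n

print(round(generation_size(k_a, 3)))  # 216 new users by generation 3
print(round(generation_size(k_b, 3)))  # ~497 -- more than double
```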

© Robert Wright/LDV Vision Summit

Evan: How were you tracking it, though? Those were still early days, and companies are still struggling with how to track whether one word is better than the next. Were you guys hacking together stuff behind the scenes, was there actually a whole analytics platform, or were you flying blind with the blindfold on?

Josh: This was 2004. There were none of these analytics platforms or anything like them. We had a data warehouse, so every night the entire LinkedIn database was cloned into the data warehouse. Then the next day I could go in and tinker around in the data warehouse-

Evan: What kind of tinkering? What kind of coding? What were you doing?

Josh: I was writing a bunch of SQL queries that were basically selecting the group that received invite A, versus invite B. Because we actually logged the entire chain - if I sent an email to an email address, I could then track what text they got, and then when they came and signed up - I could track their conversion rate and their behavior. I could join all these things.

I could actually test, from the original invitation, how many people signed up and how many people then sent more invites, and I could then go to the second degree, too. It started to get too complicated for my level of SQL knowledge. Now we have much more robust virality platforms; back then we had to scratch and claw and make it up as we went.
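[A minimal sketch of the kind of warehouse query Josh describes, against an invented two-table schema rather than LinkedIn's actual warehouse: join each signup back to the invite that brought it in, then compare conversion per invite template.]

```python
# Minimal sketch of invite-cohort analysis in a data warehouse.
# The schema and rows are invented for illustration; this is not
# LinkedIn's actual warehouse or data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invites (id INTEGER PRIMARY KEY, sender_id INT,
                      recipient_email TEXT, template TEXT);
CREATE TABLE signups (user_id INTEGER PRIMARY KEY, email TEXT,
                      invite_id INT REFERENCES invites(id));
INSERT INTO invites VALUES
  (1, 10, 'a@example.com', 'A'), (2, 10, 'b@example.com', 'B'),
  (3, 11, 'c@example.com', 'A'), (4, 11, 'd@example.com', 'B');
INSERT INTO signups VALUES (100, 'a@example.com', 1),
                           (101, 'd@example.com', 4);
""")

# Join each signup back to the invite that brought it in, then
# compare conversion per invite template -- the A-vs-B comparison
# described above.
for template, rate in conn.execute("""
    SELECT i.template,
           COUNT(s.user_id) * 1.0 / COUNT(i.id) AS conversion_rate
    FROM invites i
    LEFT JOIN signups s ON s.invite_id = i.id
    GROUP BY i.template
    ORDER BY i.template
"""):
    print(f"template {template}: {rate:.0%} conversion")
```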

Evan: Before you joined there, did you know anything about SQL or did you think it was a kind of food from Europe? What's the deal? Did you just quickly teach yourself one day and say, "Hey we need this. We need this, and we don't have resources," and you did it, or what?

Josh: My first job was as a programmer. I worked on a product called RealPlayer that some of you might remember. Trust me, when I meet 22-year-old founders these days and I say RealPlayer, it's a total blank stare.

Evan: I mention dial up waiting for an image for half an hour and they're like, "What?"

Josh: Yeah. Re-buffering video. I'd been a Windows programmer building the RealPlayer client and running the team doing that. I had a lot of coding experience, but I hadn't done much database work. Given that I had already been writing code and shipping software, picking up SQL to do the basic queries I was doing was fine. As I said, I did kind of reach my limits, and every once in a while I would write an inner join that they would come and yell at me for taking down the data warehouse.

Evan: We heard from Jack Levin yesterday talking about being the first infrastructure employee at Google - someone would press a couple of buttons and the whole system would go down, and he had to take a bike to the office and go pull out the plugs and put them back in. So times have changed.

Josh: Yeah times have changed.

Evan: What did you learn ... It's easy to talk about the good things; sometimes we learn from the good things, sometimes we learn from the bad. What's a mistake you really screwed up at one of those early companies that surprised you, and that you learned the most from?

Josh: That's a great question. One of the biggest mistakes - and one of the great learnings. I like to think of working at these companies as constantly running experiments. Everything you do is an experiment. You basically say, "We have a thesis, it's either going to work or not, and we're going to build this thing and see what happens." That way I try not to think of it as a mistake, because we went in with a thesis and we tested it - rather than "I know this is going to work," and then, when it doesn't, being disappointed.

Evan: It's that psychology again. Did you study psychology?

Josh: I actually did.

Evan: I knew it. It's a perspective.

Josh: I did this great program at Stanford called Symbolic Systems...

Evan: Okay. Now that makes sense.

Josh: Linguistics, psychology, computer science and philosophy. When I joined Twitter it was 2009; Twitter had already been on Oprah and a bunch of people had heard of it. We had this massive problem where signups didn't stick around. We did a bunch of analysis, and interviewed and called a bunch of users. One of the big beliefs in the company was: we have to change the Twitter homepage. If we just make Twitter easier to understand when you show up on Twitter.com, then it will grow so much better.

This was just this common thread in the company. I think it still is today. "If we just change the homepage, everyone will do it." The first project that I got started on, we rebuilt our onboarding flow. That actually worked really well. I got some credibility in the company, and they were like, "Oh, we can change things and actually make progress-"

Evan: “Josh actually knows what he's doing-”

Josh: I made a good guess and it worked. Then they were like, "The next thing, let's go make that homepage." We were like, "Let's go rebuild and design the homepage. Let's make it fresh with like new tweets coming in that show what's happening live. We'll show trends. We'll show all this stuff to make the Twitter homepage much more dynamic so you'll get a taste of Twitter," and this was replacing a page that had a big search box.

If you don't know how to use Twitter and the first thing you do is type something into a search box to search Twitter, that's probably the absolute worst way to try to figure out what the heck is going on on Twitter. We did all these changes. We made this page. It took us several weeks, because we had to build a new algorithm that would surface the top tweets, and build some editorial controls so that, for instance, we could immediately take something down if something inappropriate happened to show up.

Then we shipped it, and we actually did an A/B test. We were like, "Okay, we'll ship it to half the users." It's very tricky to do A/B tests on logged-out pages, because you have to cookie people and they might come from a different computer, but we tried to do our best analysis. We shipped it and it made zero difference. What we learned - or what I assumed we learned - is that everyone who showed up to Twitter had so many preconceived notions of what it was about that just clicking to sign up, and hopefully being taught through the signup flow, made much more of an impact than whatever we put on the homepage. As we kept testing, it turned out that removing all options except for login and signup actually performed mildly better than everything else-

Evan: Because you hooked them in.

Josh: Well, by not giving them any other ways to click, get lost, and go away. It turned out that not adding any features to the page - adding nothing - was by far the best. I've been gone from the company five years and they keep testing this stuff, but for a long time the Twitter homepage was basically: type in your email address and password if you're logging in, or first name, last name and email address to sign up. That was it. Everything else was distracting.
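[For readers who haven't run logged-out experiments: below is a minimal sketch of cookie-based bucketing, an illustration rather than Twitter's implementation. The caveat Josh raises is visible in the code - the bucket follows the browser cookie, so the same person on a second device is re-bucketed. The experiment name is invented.]

```python
# Deterministic cookie-based A/B bucketing for a logged-out page.
# Illustrative only -- not Twitter's implementation; the experiment
# name "homepage_v2" is made up.
import hashlib
import uuid

def assign_bucket(cookie_id: str, experiment: str = "homepage_v2") -> str:
    """Hash (experiment, cookie) into a stable bucket: the same cookie
    gets the same variant on every visit, but a new device means a new
    cookie -- hence the tracking caveat for logged-out tests."""
    digest = hashlib.sha256(f"{experiment}:{cookie_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

cookie = str(uuid.uuid4())      # set once in the visitor's browser
print(assign_bucket(cookie))    # stable "A" or "B" for this cookie
```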

Evan: Regarding that philosophy of testing - that psychology that not everything is going to be perfect, so keep on testing. How does that transition to your mentality with companies you've invested in early, that come to you and have board meetings: "This is working, that's not working," and obviously there's some tension there. "What are we going to do? Where are we going to go? Is this going to happen or is it not?" How do you work with them, with your deep knowledge on the product side, to help the business get to where everybody wants to go?

Josh: It's funny - we think of investing very similarly to this philosophy, and we think of company building the same way. Reid Hoffman, who founded LinkedIn, is now my partner at Greylock. I remember a conversation he and I had in 2004 where I said, "Explain financing to me and how that works." He said, "Look, it's a series of hypothesis tests. You raise enough money to go challenge a set of assumptions and try to see if they're true or not. If you mostly prove them true - we can build a product, people will use it a little bit, maybe they will share it with friends - then you go test the next set of assumptions, which is: can we build more features or more things that will help them use it more? Then you raise more money to go do that, and if that works, you turn it into a business. Then you raise a bunch more money to turn it into a business."

He says the entire process of building a company is a set of hypothesis testing. We very much keep that philosophy at Greylock. When we make an investment we're not saying, "This had better work. If this doesn't work we're going to fire you as CEO and remove funding from your company." It just doesn't work that way. We're ready to join the journey, we're excited for the next hypothesis, and we're giving you this much capital right now in order to go try to prove it. If we get towards the end of that amount of capital and we haven't proven enough of our assumptions, we should rethink what we do with what you've built, with your own career, with where you're going.

If, by the way, we've proven them faster, we should go raise more money and get a lot more aggressive about continuing to prove them. It has been a couple of years now, and I've already gotten to see both sides of that journey. It's incredible. We just keep it as this testing; it's never failure. At some point you may decide that with this cap table, this amount of money and this set of people you've hired on this journey, maybe that's enough to try something new and keep going, or maybe it's time to rethink things-


The entire process of building a company is a set of hypothesis testing. We very much like that philosophy at Greylock. When we make an investment...it's like we're ready to join the journey and we're excited for the next hypothesis and we're giving you this much capital right now in order to go try to prove it.

-Josh Elman


Evan: There's a hard question I want to ask, and it's tough to pinpoint, but it's actionable. Some of the companies are doing great - fantastic. They're doing their thing and they have the secret sauce, whatever it is. The ones that are not are the harder ones on both sides. It's kind of like: if you failed once and you learned - great; if you failed twice and you learned - okay, you might still have potential; but if you failed three, four, five, six times, it becomes a potentially negative trend.

If the hypothesis doesn't work four, five, six times, is there coaching from your experience, given your expertise in network effects? Give us an example of something you thought was going to work three, four, five times and it either never did, or that one thing came out of nowhere - the story behind the scenes, to give people a flavor of doubling down and tripling down. How many times can I fail before I give up?

Josh: Look, you guys might know I invested in a company called Houseparty (previously known as Meerkat) just over a year ago; I invested right before South by Southwest. I'd been doing live video since RealNetworks, and I believed we were in a world where mobile live video was about to happen, where everyone in this room could whip out their phone and be broadcasting live video immediately.

I'd met Ben Rubin - and by the way, Houseparty was already the third product of that company, because he had started it several years before and built a team in Israel that said, "We're going to build something really important for live video that we think is going to be amazing." I had gotten to know him, and we'd shared a lot of ideas on live video, and when the product popped in late February, early March of 2015, I was like, "Look Ben, I'm really excited. This may be the moment in time when the phones and the network capacity can handle it, where we go do this." I also knew that Twitter had bought Periscope, a different live video company - they hadn't announced or launched it yet, but I knew about it - and I said, "But I'm still going to back you, and this is going to be such a big pie that we're going to figure out what our share of it is alongside Twitter."

Fast forward four or five months: we were actually doing reasonably well versus Twitter. I think Twitter was ahead, and Periscope was growing a little bit faster than us, but we were holding our own. Then Facebook came out and said, "We're going to do live video now within our network. Anybody who's got a celebrity or an audience on Facebook can just go live to their whole audience." We were like, "Oh man. Maybe we can compete against Twitter, and that's hard. But competing against Twitter and Facebook - really, just competing against Facebook - is really hard. Right?" Facebook said, "Let's go live," and billions of people just show up immediately.

Evan: That's kind of challenging to deal with.

Josh: Live video is much more about intimacy and authenticity. Part of the reason we're all here is to have a much more authentic experience in the moment, versus somebody watching later. It's not just about numbers, but numbers do help. We learned that through that experience. We said, "Okay. This Houseparty thing, trying to compete against broadcast live video, may not be the thing."

The team has really hunkered down and is working on something new again, and we're seeing some really interesting, early positive stuff. What they're doing is much more around group social video. It's really exciting to see them keep taking cracks at it, and the nice part is that we put together a great syndicate and gave them enough capital to really go try to prove out the future of live video.

It wasn't just a bet that Houseparty must work or bust; it was a bet that Ben and the great team he's built - the company is actually called Life On Air, because it's about how you live your life on air - are going to produce something great. I'm still excited, but they also have a runway. At some point the capital we gave them might run out, and if we haven't put ourselves in a position where this company is worth a lot more capital, then we'll figure out what the right thing to do with the company is.

Evan: Cool. Relating to Houseparty, but also probably Jelly and several other companies you've invested in - we started our pre-interview chat online a couple of weeks ago with a lot of photographs of your face, which related to the title, because you think faces are the key to social platforms. I thought that resonated with this audience and with this topic. Tell us a little bit more about why faces are the most valuable part of social platforms.

Josh: We did these heat-map eye-tracking tests of Facebook and Twitter seven or eight years ago. You could immediately see that, as humans, we're programmed to recognize a face and its shape, almost from when we're babies. You could just see, as people read a Facebook feed, that when there are faces as the profile icons, people's eyes immediately gravitate toward them and then read the content. Then they go to the next one and read the content. That's a really important thing.

When you're reading a set of content on Facebook or Twitter, you may not realize it, but you're reading the name and identity of the person and then what they say, then the name and identity of the next person and then what they say. That's what creates the very conversational, very human nature of the whole experience, and that is very, very hard to replace. There are other content platforms where name and identity aren't important. You can go to one of the anonymous platforms like Yik Yak and realize it's just content - there are no names or faces - and you actually read it very, very differently than when you're reading stuff on Twitter or Facebook, because those faces are just so key.

That identity is really what keeps people there. I meet people I have been following on Twitter for a long time, and I've seen their words next to their face for so long that I feel like we really know each other - we're just having this dialog in conversation tied to their face. Then you look at Snapchat, which came out more recently, and it was like the quintessential version of the face - the most authentic faces. People make funny faces and awkward faces and faces before they've cleaned themselves up in the morning-

Evan: Which of those faces drives more traffic? That's interesting because of facial recognition and a lot of the computer vision we've discussed yesterday and today in this sector. At what point can the algorithms start delivering on what we understand about sentiment analysis, about what the face is doing, and about what the personality of the person behind the face is doing?

Josh: It's interesting - the thing we've seen that drives the most traffic is when the face changes.

Evan: You mean in animation or over time?

Josh: No, over time. Changing your profile picture is the single biggest way to get more clicks to your profile, have more people looking at it, and actually get your content more likes and more comments, because we get used to seeing you look the same way in conversation. When you change it, that's the single biggest trigger.

If you want to take this trick and go get a bunch more likes or comments or retweets on whatever social platform you're using, go change your profile picture every week or so. You'll be surprised: "Whoa, that really works?" It's like when you get a new haircut and everybody who knows you says, "Nice - new haircut." It's exactly the same philosophy.

© Robert Wright/LDV Vision Summit

Evan: I'm sure you could probably also test the type of comments you get if you put up an unhappy or depressed face versus a laughing face. We've talked about some deals you've already done, and towards the end of last year you wrote about four or five topics you were interested in. For the audience's sake, they were: live conversations, interest groups, better self-expression, preserving all the content we make, and VR/AR. In addition to being a great guy and a smart operator and investor, you are very focused on opportunities that relate to everything in this space. Would you add anything to that list from the last five months - what is the newest, and latest, and greatest?

Josh: I think all of these are really important, because I think we are more connected than we've ever been to the people around us, yet I think we are somewhat less fulfilled by that than we've been in a long time. I feel like Facebook and Instagram and Twitter have gotten us so good at shouting at each other - even Snapchat stories, in some sense - where you are looking at other people in their best moments while you're bored on the couch, or late at night, or in bed. We're not having as many intimate, real experiences. A lot of the stuff around connecting people through interest groups, or around more live moments and live conversation, is really about creating much more human interactions around the stuff we care about. That is much more enriching than seeing somebody else have a great time at a party - and the time you're really looking at that is when you're not having a great time, because that's when you have time to look at your phone.

I think there's a lot that's going to happen there. I think all of that is still stuff that I'm looking for, because we haven't seen anything really transform or break out. I'm also getting more excited about Lily and some of the connected-camera stuff that we're about to see. We're moving into a world where we're so used to having a phone in our pocket, but that's still one too many steps. Pervasive cameras, and cameras that can get a brand-new perspective we never could before-

Evan: So the Internet of Eyes?

Josh: I love your phrasing of the Internet of Eyes. Have you guys seen the Hover Camera, which has a new demo and will launch sometime soon? Watching that video, it was an actual camera that we would feel comfortable with in here, looking at us and getting a new perspective that we could never get without really expensive equipment.

Evan: Whether or not we're all going to have our personal flying cameras hovering outside, waiting for us to leave so they can follow us - it could be a good thing and a bad thing. It's a little odd. It's almost like a parking lot of flying Lilys waiting for us.

Josh: I think it's really interesting. We talked about that - wouldn't it be great if they were silent? The problem with physics is that it's still really hard to lift a piece of equipment and have it be silent. It's also hard to have a camera and a computer and all this stuff with enough battery life that we don't worry about it crashing on our heads. I think we're a little way off from it feeling like something we're comfortable with around all the time.

I think we're going to start wanting many more moments captured so that we can actually be in the moment. The whole problem with the camera is you either do it with a tripod or you're never actually in the moment. Growing up, there are like three pictures of me with my dad, because he was the family photographer. I think that's going to be a big trend.

Evan: One of the things that I would love: you came from California, flew here, went through the airport, took whatever transportation, walked across town - there were a thousand cameras that you passed. You don't control access to that content, but what if you could? What if it said, "Oh, Josh arrived here. Here are three pictures. Do you want them?" Or they could be sent to you as a service. People look at that identity tracking, the Internet of Eyes, sometimes negatively, sometimes positively - that's the positive side. We are in many places, so why do we have to make the picture ourselves or have somebody else do it? There are pictures being captured that have value.

Josh: I think that's a great point. As we move into this world, at some level there's a massive debate about privacy, but I think the reality is it's always a trade-off between privacy and convenience, or privacy and connectivity.

Evan: Isn't privacy dead?

Josh: At some level privacy is dead.

Evan: At most levels. It's pretty much dead.

Josh: In the United States I feel comfortable saying, "Look, there really isn't anything I'm doing that is so private that I worry about it being used against me." I do respect that people from other environments - and even in the political environment we may be entering here - may get a lot more worried that the things they are naturally doing could be used against them. I don't think privacy and encryption are going to be dead until we really live in a world where we don't have governments or other criminal activity that can really-

Evan: That's always going to exist, unfortunately. I'm a very private person about many things, and there are other aspects where I'm very public - we're here on stage and we're very social in sharing views online - but there are aspects I want to keep private. I still believe that, because of the growth in technology and the evolution of what's going on, privacy as we know it is definitely dead.

Josh: I think it's fair to say that the expectation of true privacy in anything we do is pretty much dead.

There's a really important thing here, which is that obfuscation is going to be the key. I don't really share much on Twitter about the fact that I have a child. If you follow me on Twitter, you could barely detect that I have a kid, but if you follow me on Facebook or Snapchat, I'm very public on those platforms about my kid. I actually think we're going to get much more into selective sharing, and I think we're going to see a lot more happen there.

Evan: That's a great question. Let's get the guys with the mics - we are going to start taking questions. We have about seven minutes left, so anybody who has a question, raise your hand in a second. Regarding that question of sharing: you specifically said you don't share photos of your kid, or talk about it, on Twitter. Where do you like to share which photos? Do you have a mental filter, or is it just a natural process that's evolved? What do you share on Snapchat versus Twitter?

Josh: On Twitter, having worked at the company and having lived this slightly more public life, I love talking about everything going on in technology that comes to my mind. I love talking about things around sports - I'm a huge Seattle Seahawks fan.

Evan: That's Twitter or is that Snapchat, or both?


Josh: This is mostly Twitter, because I find that that's where I get the most engaging conversations around topics of technology, a little bit of politics, and a little bit of what's happening in the world. On Facebook and Snapchat I'm much more selective about who my friends are and the people I'm sharing with. I just share much more about life - sometimes a little about work, but a lot of the time it's more about my personal life and family. My mom likes just about everything I ever post on Facebook.

Evan: That's a good thing.

Josh: That's a great thing.

Evan: If she didn't, would it upset you?

Josh: No. Sometimes. She doesn't need to like everything that quickly.

Evan: That's a sensitive issue? "Why did you not see it? Did you not like it enough?" Do you ask her?

Josh: No. Sometimes. It's just like, "Okay - do other things instead of being on Facebook all the time waiting for me to post."

Evan: She's waiting for you. She wants to live vicariously through your life.

Josh: I do think about that. I just try to share real life there, and on Twitter I just don't feel as comfortable living normal life publicly. Sometimes we all have our first world problems that we occasionally whine about. It's fun to do that with the right group of friends, and it's embarrassing to do that on Twitter, when everybody yells at you: "Why are you complaining about your Uber going too slow?"

Evan: Questions. Who's first? Over here, Rick. Who's got a mic? Okay. Go first.

Audience Question 1: Hi, Rick Smolan. I thought that LinkedIn's purchase of Lynda.com was brilliant. I was on an airplane the other day, and you can now spend six hours learning Photoshop or whatever you want. I'm curious as to how that decision was made. Likewise, all of a sudden LinkedIn's share price dropped in half in one day, which terrified the whole market. It seemed to me that the opposite should have happened after Lynda.com. Could you talk about those two?

Josh: Yeah. I'm not involved in LinkedIn other than being partners at Greylock with the founder - I'm just a shareholder. These comments are totally not associated with the company at all. LinkedIn from the very beginning was about how do we actually help your professional life? We talked about this in 2004; in fact, when I was there, it was how do we help you be a better professional? Obviously one of the ways was to give you access to your network and to your network's network, so you can actually reach people in a way that you previously couldn't. Beyond the hiring use case, we spent a lot of time talking about the expertise use case. If I want to find somebody who is an expert in Photoshop, or an expert in privacy, or an expert in the legality of something, we really wanted LinkedIn to be the place you could do that.

As they've gotten more into empowering the economic development of everybody, they realized that giving you the platform to learn and build skills was incredibly important. Lynda, by far, was the leading platform. They had created so much great content and had so many educational ways to build great skills. LinkedIn saw it as a way to move forward in helping professionals be better - they saw that as a great fit. I think so far, I hope, it's been working really well for them. I think that the future of a platform where LinkedIn knows more about me as a professional than anywhere else, and can really help me be even better, is going to be great.

In terms of the share price, I think the stock market and the innovation that's happening at companies are sometimes out of whack, and the market doesn't always understand. It's LinkedIn's job to keep building a great business and prove it.

Evan: The next one's over here.

© Robert Wright/LDV Vision Summit

Audience Question 2: Hi Josh. Adaora, Rothenberg Ventures. I'm just wondering what your thoughts are on what some consider the VR/AR hype and others don't.

Evan: Good question.

Josh: If you were to ask me whether in five years everybody will have a VR experience on a frequent basis - weekly or a couple of times a month - or whether AR will be something much more pervasive, I think the answer is probably yes. I'd be really surprised if it's not. If you ask me whether that's in two years, I don't know. I think VR is a great entertainment experience right now, but I haven't been comfortable enough in there to want to stay in for a long time. I think there's a lot of physical technology improvement still to happen, let alone all the content and great experiences to get created. I think we're overhyped in the short term, but probably not overhyped in the long term. Sort of in the same way the internet was in 1999. It was like, yes, all these things will happen online. It will be incredible, but it might not happen in two years.

Evan: Which is one of the biggest challenges as an investor isn't it? When to invest in those people?

Josh: That's the thing. We've made, I think, one stealth VR investment. Our theory is that small teams building things that can be really core building blocks, and that can scale up as the market emerges, get us much more excited than a bunch of companies that are spending a heck of a lot of money right now to build all the demos and everything else. I also think, by the way, that the way most people are going to experience VR is going to be outside their own home over the next year and a half or two years. Very few people have the devices, but you'll go over to your friend's house. Even more, I think there are going to be a lot of physical locations you'll be able to go to that will become fun, kind of like what arcades used to be. Then in a few years it will get back into all of our homes.

Evan: Good analogy.

Audience Question 3: Josh, we in the media this spring really like to write about bots. What's it going to take for bots to go from hype to something that actually is relevant for people?

Josh: I think that's a great question about bots. We love talking about VR and bots because we know that they're new things that are really important, and the number of hours people spend in messaging apps is going up exponentially every single year. Right now we're all really good at talking to our friends - when I want to talk to a friend now, I don't think about phone calling them. I think about messaging them and getting a message back. Yet when we want to talk to businesses, or get information elsewhere, you kind of have to call the business or go to their website or something else, where messaging is actually often the best interface.

I think we are still confused about bots. Right now the NLP (natural language processing) isn't quite there. We have very high expectations: if we say something, we expect a human on the other side to understand it and respond back. NLP is close but not quite there. None of these experiences are great, and you start to learn this very cryptic language to interact with your bot. I think we are a few years out from bots being natural enough to represent everything. In the shorter term, we're going to see all this business behavior - all the phone IVR (interactive voice response) trees, press one for this, press two for that - done way better in a bot. Just open up Messenger, type in the name of the business, like Comcast, go through the exact phone tree, and you can actually do everything much faster than you could sitting on the phone.

I think we're going to see all of that happen in the short term. I also think we're going to see bots that are content delivery - whether it's "your shipment just got mailed," a daily newsletter, or a breaking news alert - pushed into your messaging, because that's where you are spending all your time and doing all your content. The interactive bots, I think, are going to take a little bit longer to play out. I do think that in the much shorter term we're going to do a lot more in messaging than just message friends.

Evan: All right, a couple of last questions to end off. These are fun and challenging questions. Why do most entrepreneurs not succeed?

Josh: Because it's really, really, really hard to take a great idea and a great group of people, build it, hit the right market at the right time, and get it into people's hands. You have to think of it, instead of why don't most succeed, as how the hell do the couple that really do succeed, succeed? It's so much luck. You can do all this hard work and get in a great position, and if you don't also get lucky at the same time to capitalize on all that hard work you've done, it just doesn't happen.

Evan: I didn't mean to be negative, but both of those are the right ways to look at it. Relating to that, one of the questions I love asking all investors, because I think it's actionable for the audience: you speak to a lot of entrepreneurs - your favorite personality trait of an entrepreneur, one-word answer, and your most hated personality trait of an entrepreneur, one-word answer.

Josh: We're at a Vision Summit, so the number one word I would use is vision. Just somebody who can paint this picture of a world five years from now and get me excited and intoxicated about that. The other word I like to use - I'll give you two - is learner. Somebody who's constantly learning and processing new information and coming up with new theories on the world. The most hated trait is arrogance: a non-listener, somebody who doesn't listen and learn and interact that way.

Evan: Mine would be, as I said yesterday, passion, and the hated one for me is selfishness - a CEO that's selfish. Some may like that because they're going to do it no matter what, but if they're not a team player and thinking about the whole ecosystem, I think they're doomed to fail. Josh, thank you very much. This was fantastic.

Josh: Thank you everybody. It's a pleasure.

The annual LDV Vision Summit will be occurring on May 24-25, 2017 at the SVA Theatre in New York, NY.

 

Applications of Computer Vision & AI Will be Life Transformative

(L to R) Jessi Hempel, Senior Writer at Wired, Liza Benson, Partner at StarVest Partners, Howard Morgan, Partner & Co-founder of First Round Capital, and Alex Iskold, Managing Director at TechStars © Robert Wright/LDV Vision Summit

Join us at the next annual LDV Vision Summit.  This panel, “Trends in Visual Technology Investing” is from our 2016 LDV Vision Summit. Moderator: Jessi Hempel, Senior Writer at Wired. Featuring: Liza Benson, Partner at StarVest Partners, Howard Morgan, Partner & Co-founder of First Round Capital, Alex Iskold, Managing Director at TechStars.

Jessi: I'm Jessi Hempel. I'm a senior writer at Wired and a big fan of Evan Nisselson of LDV, and I'm here with a panel of folks to discuss a topic which I'm sure is going to be a surprise to all of you: visual technology investing opportunities. It's kind of what the day is about. I could introduce our panel, but I'm actually going to let them each introduce themselves, because I'd like you each to also give a couple of lines about the institutions you represent. I know we've got a late stage, an early stage, and an incubator guy. Alex, let's start with you.

Alex: I'm Alex Iskold. I'm managing director here at Techstars in New York. Techstars is I think a world class accelerator. We help early stage companies go faster by surrounding them with incredible mentors and helping them figure out the business and secure capital.

Jessi: Thanks.

Howard: Howard Morgan. First Round Capital, early stage ventures and we try to help them grow.

Liza: I am Liza Benson of StarVest Partners. We are an expansion-stage venture fund focused primarily on B2B SaaS and technology-enabled business services, and typically we invest in companies with between $2 and $20 million in revenue.

Jessi: We've kind of got the whole gamut represented on our stage. So, I want to start big, and I want to start by framing this according to something that Evan has spoken and written quite a bit about: this idea of the Internet of Eyes, the advancement of the Internet of Things into a sort of visual representation. The idea that all of the objects around us could be watching us both unlocks opportunities and also terrifies me. Let's talk about what those opportunities could be. Let's imagine a little bit.

Howard: Well, obviously if it knows what I am doing - I've just woken up - it can take actions. It can see whether I actually got out of bed or I didn't, in which case it'd have to wake me up again, and sort of go from there to try to understand my intent for things, if I'm willing to let it watch me all day. I'd like to be able to say no for a while. You know, turn it off: close your Internet of Eyes for a little while, and believe that it's actually closed for a while. Obviously, we've already invested in things that are doing that in shopping, and stores, and so on, and the same thing is going to be true in factories.

(L to R) Jessi Hempel, Senior Writer at Wired, Liza Benson, Partner at StarVest Partners, Howard Morgan, Partner & Co-founder of First Round Capital, and Alex Iskold, Managing Director at TechStars © Robert Wright/LDV Vision Summit

Jessi: This idea that cameras are infiltrating our lives, creeping into objects around us - it's been ongoing for a while. So, let's talk about the specific opportunities that 2016 unlocks. You know, I've had a camera in my iPhone for a while. What's new? Maybe I'll jump to Alex, because you're seeing a lot of what's new in the companies that come to you.

Alex: Yeah. I'm trying to figure it out. I have no idea, but in my mind this falls into separate categories. There are sensors. There are things that detect moving objects. There are things that have cognition and recognize things, and then there are algorithms that actually act on the information that sensors capture. So I would bucket them all in slightly different categories and then kind of re-assemble them to determine how it could be helpful. Let me start with an idea of applications for people who are visually impaired. Could we build a camera in front of someone's door that would take pictures, actually recognize family members and caregivers and other people that come in, and tell you who is there? To me that's a super interesting and pragmatic application - way more useful than something following me around and trying to be helpful where I don't need its help.

Jessi: For sure. For sure, but I guess my question is, are you seeing those kinds of companies right now?

Alex: I'm not running the IoT program; Jenny Fielding does. I do see some of these. The areas that I've been focused on, for example, are more frontier tech: satellites taking pictures and running some sort of computation. My focus is mostly on, not necessarily the sensors, but what do you do after you capture the data?

Jessi: Right. When we're talking about visual technology, how much of it is so integrated into technology generally that it's maybe not even useful to put the "visual" frame on it, and how much is unique to adding cameras to things?

Liza Benson, Partner at StarVest Partners © Robert Wright/LDV Vision Summit

Liza: I think that visual technology enables you to go into the real world so that you're not only dealing with online. I mean, in our fund we've invested in some things that are looking at in-store analytics because most shopping, you know, 80% of retail still happens in stores.

Jessi: Right.

Liza: So that's a really important thing that visual technology enables us to do that is separate from, you know, eCommerce technologies or something of that sort.

Jessi: Tell us a little bit more about that, because you were talking about a company, RetailNext...tell us a little bit about that company.

Liza: What they're doing is using visual technology in stores to create the same sort of analytics that you'll see online, in a store. How do people dwell? How do people navigate around a store? All those things are done via cameras, and it yields some very interesting insights for large retailers out there.

Jessi: So, where are retailers on the spectrum of their capability of using it? Is this R&D at this point? Are they actually deploying it?

Liza: No, they're actually deploying it, but it's absolutely nascent in terms of penetration - very few retail stores have this type of technology everywhere.

Jessi: What is it going to take for it to be deployed more widely? Is this a question of consumer comfort?

Liza: Time, money. I think it's just a matter of time.

Jessi: Yeah. Got it. Howard?

Howard: Well, one of our companies, which does software for drones and their cameras, is involved in doing inspections of power lines. There are lots of sensors on power lines, but if you want to see if an insulator is broken, you fly the drone by and you can see, "gee, that insulator looks a little bit off," and you can zoom in and fly around. You can do all sorts of things visually to inspect things in out-of-the-way places. Non-visual sensors alone just don't give you enough information.


Tickets for our annual 2017 LDV Vision Summit are on sale
May 24-25 in NYC


Jessi: Fair enough, and the name of that company?

Howard: Airware.

Jessi: Airware. Cool. You had mentioned another company when we were talking earlier. Is it Nanotronics?

Howard: We talked earlier about Nanotronics, which is able to take very high resolution images - super high resolution, almost the kind of thing you get from electron microscopes - captured with an optical methodology and in real time. It's for areas where you want very high resolution, because the defects they're trying to find visually are way too small for the naked eye to see and way too small for normal cameras to see. Visual technology is advancing dramatically.

Jessi: So, where are those companies in their lifespan right now?

Howard: Airware's a few years old. It has raised a lot of money from us, Kleiner Perkins, and others, and is selling. Nanotronics is actually close to break-even and is installed in a lot of companies already. So, they're pretty far along.

Jessi: Pretty far along. As investors out there looking, what is the flag that tells you that something has promise in this area, that you might be interested in investing in it?

(Speaking) Alex Iskold, Managing Director of Techstars © Robert Wright/LDV Vision Summit

Alex: Just following up on what Howard said, there's a company that just graduated Techstars that's applying drones to building inspection, pairing human smarts and software and drones to all act together, because it's super dangerous to go up those buildings. Now, going back to your question, in my mind this is a phenomenal application of drones and vision. It's solving very obvious real-life problems. I don't know if it's necessarily a billion dollar company, but it's certainly solving a real problem, improving people's lives, and potentially saving people's lives. So, when I look at these technologies, I think the challenge is that some of this stuff is very far out there, like research projects. So when we look at the companies we ask: can we help accelerate them now? Is it an investable business? I'm sure investors at different stages are looking at the tech through different lenses.

Jessi: Right.

Alex: What's the maturity of this tech? I think it's an interesting question.

Liza: I think it's like anything else. It's efficiency, and doing things that were either too expensive or too difficult to do before. You know, some of the things that you were mentioning in terms of examining power lines - how could you possibly send a guy up on every pole to look at all these power lines? It's essentially economically impossible. We don't invest in drones - it sounds very exciting - but I think from an investor's standpoint it's completely disruptive to the current way of doing things.

Jessi: Fair enough. So, what would you like to see improve so that some of this technology gets even better?

Alex: I don't know. I was thinking about the company that does semantic object recognition and boundary detection, and the application that I can see immediately is in augmented reality, because if you were to build any kind of augmented reality you actually need a very precise way to identify where you are, and navigation. I think that is an example of something that's in the lab right now, but as an investor and a business person I actually see a relevant application and related vertical that maybe researchers don't necessarily see. I think there's an advantage in these kinds of forums, in mixing us together.


Something that's in the lab right now, me as an investor and a business person, I actually see the relevant application and related vertical that maybe researchers don't necessarily see. I think there's an advantage in these kind of forums and mixing us together.

-Alex Iskold, Managing Director of Techstars


Howard: Two things. One, I'm sort of old school: better, faster, cheaper. That's kind of the usual mantra in technology investing. But also, particularly when we're talking about vision, we mostly think about human vision, yet a lot of what's happening in agricultural inspection and in medical is outside the visible spectrum. It's near infrared. It's ultraviolet. It's X-ray. So, expanding vision - and those sensors are just getting to the point where they're cheap enough to be used. Visual sensors, because of camera phones, are almost free.

Jessi: Right.

Howard: You can put cameras everywhere, but if you want near infrared, ultraviolet, or X-ray, those are still pretty expensive, and we need to move those along.

Jessi: Were the costs to come down on those, Howard, what could they begin to unlock?

Howard: Well, that's what we have entrepreneurs to dream up ...

Jessi: That's your job, guys. (To Audience)

Howard: Those things - but think about X-ray technologies. If you had X-ray, there are a lot of military uses. You want to see in the buildings where the terrorists are. You want to see where the bombs are that are hidden in the walls. It's stuff like that, and medical also. We heard about a company, MD.ai, looking at radiological images. Right now you have to go through an MRI. That's a pretty painful, or at least uncomfortable, thing. Why can't we get visual inspection of the human body the way we're doing visual inspection of other things, without the same level of discomfort? So, I think a lot of things could be interesting.

Jessi: Fair enough. I want to go back to what Alex said about AR. It struck me when Dijon was on stage just a little earlier. He said, "I want to like AR." He also spoke about 2D, 3D, and 360 images as being sort of a bandaid fix until we get to VR and AR, and I'm curious what each of your perspectives is on the immediate and the far-out future of these technologies.

Alex: I like both AR and VR in the sense that I see them as fascinating and mind-blowing. I'll quote my daughter. My older daughter tried on a VR headset, and when she took it off she said, "Daddy, virtual reality's much better than reality." This is one of those sentences that gets etched into your brain - until you go senile you're going to remember it.

Jessi: Alex, how old is your daughter?

Alex: She's 12. So, my quick download on AR: we haven't invested in any of those companies, but I'm very excited about it. In terms of the bandaids, there's a Techstars company called Sketchfab, which is the largest repository of 3D models, and they just released a VR viewer. Suddenly it becomes much more fascinating, because you literally can go through the British Museum and experience these artifacts - it's literally a mind-blowing experience. So, I think it's very true that 3D itself is a little "boring" from the consumer perspective, but putting that stuff in VR becomes pretty incredible.

Howard: We have a company, Parcelable, which is AR, selling into the oil well industry. Basically the AR is used when you need hands-free operation for people fixing machines. They're able to see the instructions and see what the next steps should be, but instead of holding a tablet or a book while their hands are getting oily and dirty and they're holding tools, they're able to see it all right in front of them through the AR. Very effective usage. It started out using Google Glass; now it's using other technologies. But AR has real uses in the industrial sector when you need hands-free operation of things and you need to see instructions.

Jessi: That makes sense to me when I think about the oil industry because I would think that they have the deep pockets to finance a tool like that and it could be very helpful to them. Are tools like that at any time in the near future going to be accessible to consumers in a useful or viable way?

Howard: You want to put together your Ikea furniture - and you know how many of us have done that, right? Wouldn't it be nice to have AR giving us the instructions and have the vision pick out the screw it's telling me I need? I think that is going to be viable for consumers.

Jessi: Sure, sure, sure, but just to push back on that for a second: I've piloted the HoloLens, and I actually fixed a light switch with the help of an electrician and the HoloLens, but I still had to have it connected to this big ole computer behind me, and the field of view was small. So it felt like it was actually pretty far from reality.

Howard: Yeah, it's early. We're focused mostly on B2B applications, but think, for example, going back to an earlier point, about situational training - firefighters, nurses. Do we believe in physical training? Do we believe that it's better to be in the lab and try stuff out versus just read a book? Sure. Therefore, once VR and AR achieve a level comparable with reality, it becomes incredibly interesting. To your point, though, I do think it's pretty far out. I wouldn't bet on it going mainstream in the next three to five years. I mean, three to five would probably be the most bullish I would get.

Jessi: Liza, how closely are you paying attention to these trends?

(Speaking) Liza Benson, Partner at StarVest Partners © Robert Wright/LDV Vision Summit

Liza: Personally, I would love it because it seems like it'd replace a husband for anything that you really needed done...

Jessi: Totally true, right?

Liza: On a personal level I would love it, but we haven't really seen anything at the expansion stage on the B2B side. These all sound really early. Some of the things that you're imagining, Howard, would be very interesting to us because they're applications used in the non-consumer, B2B space. It does sound a little far off, but I would certainly be a user of it.

Jessi: Fair enough. I think we all would. Do we have any questions from the audience? Anybody out there? Okay. This is your two-minute warning to come up with something good while we jump back in here. So, as we move into a world of 3D images, do all our images become 3D? What happens to 2D images?

Howard: I don't think they do. We've had 3D movies since the '50s, and they've never taken off. We've had 3D stereoscopic images, and except for certain uses they haven't been that critical. So I'm kind of with Dijon on the 3D thing. VR is a very different experience than looking at a 3D image. Back in the '90s, MetaCreations bought a company called Real Time Geometry that could take 3D photos of objects, and we were putting 3D pictures on the web so that you could look at what you were going to buy and turn it around, and it just never became a very critical thing.

People are very much used to seeing things in 2D, through catalogs. We've been growing up with that. The 3D need was less than we thought. With 3D movies, Avatar was an amazing success, and then kind of nothing. There are IMAX 3D things that you see every now and then, but those movies do just as well or better - make more money - in the non-3D versions than they do in the 3D versions, and except for some specialized areas like real estate, where you want to do walkthroughs, we just haven't seen that.

Jessi: Got it. So Alex, do you think anything is different now? Do you think we are going to want 3D now? I take your point that 2D's not going anywhere, but is 3D really upon us? Because now that I think about it, I totally remember Avatar. It feels like a lifetime ago.

Alex: It's situational, right? I think we're framing the question as 3D versus 2D, and I feel like it depends. For example, if you're an engineer and you have a 2D sketch versus a 3D model, you put more inputs into a 3D model. I think the trend has been - notoriously in enterprise, but also in consumer - that visualizations aren't always winning. For example, on Wall Street they still love their Excel. There have been so many companies trying to go, "hey, I can visually show you the portfolio," and the traders don't care.

Now, "why?" is a totally different question, but I think it depends on what the application is. In B2B, VR and 3D are already starting to make inroads, certainly in architecture. We have a company called IrisVR that basically helps you walk through a building before it's even built, so that's helpful. But with consumers, to your point, I think VR is going to be the best 3D we've got, once it works really well.

Jessi: So when you're thinking about visual technology, where does a consumer's sense of privacy fit in as you're piloting some of these? I mean, you guys talked about some very cool early-stage applications, for example, and they all require me to be pretty comfortable with the fact that I'm being photographed.

Howard: Well, I brought up earlier this morning the Enemy of the State movie, or Dave Eggers' The Circle. There are pretty dystopian views of what can happen. Even go to London, and you're on camera pretty much anywhere you are. So the answer is: it's not just that we have to be comfortable that the people who own these images are going to control them and use them in good ways. Most people have proven over the last 20 years of the internet that they're willing to give up quite a lot of privacy in return for pretty good value in other things that they get, whether it's social networking, shopping, etc. I don't know how far that'll go.

Jessi: You mentioned some dystopian tales that are familiar to me. Can you guys think of any, any tales that are the opposite? That offer a hopeful view of what the world will be like once we're able to capture it all?

Howard: Well, you can see accidents. If the image recognition says, "gee, we just noticed an auto accident in this out-of-the-way place," you can dispatch emergency life-saving services there minutes before it would ever have been reported, and you can see that in lots of possible areas. You can see it in hospitals, where patient monitoring has a visual characteristic. So, I think you can see a lot of positive things - they're just not happening as quickly.

Jessi: We don't tend to write science fiction about them?

Alex: I mean, to me, I'm just horrified at all the false positives. I'm an algorithms guy. I'm a math guy, and I find it insanely unhelpful when things around us try to suggest things to us and butt in, because they're doing it in such a quirky and incorrect way. I think that, yes, there are going to be incidents when we have algorithms that detect things that are going to save lives, but my question is, how many times is this algorithm going to blow the whistle and be incorrect? So, I think we need to give them a chance on one hand. On the other hand, I don't know if I want every millisecond of my life to be captured, analyzed, and cataloged. I kind of want to be human still, a little bit. I don't know. Maybe I'm getting old. Who knows?

Howard: Well, Gordon Bell has gone around recording his entire life for about the last 20 years at Microsoft, and of course there are some issues in some states with the people he's photographing - particularly if he's recording them, he has to get permission. But you know, the old Greek philosophers talked about the examined life. If you're on camera 24/7, it's easy to examine and re-examine that life.

Jessi: The flip side of that is if you're on camera 24/7 it's easy to let go of the habit of actually examining your life because it's always there for you.

Howard: True, but look at cameras for the police. The trend now is to put cameras on police, and that's certainly impacting the way they interact with some members of the public, probably for the better.

Alex: Jessi, also going back to your point, I actually think, for example, social media is chewing up a ton of our lives. I'm looking at my kids texting and all...

Jessi: They're not texting. They're Snapchatting.

Alex: They're texting. My kids don't have Snapchat yet, but they're sending each other text messages, and we don't butt in. I feel like there's a lot of time that goes into it. I don't really look at what they're sending, but I do see a lot of emojis flying around, and I don't think the content is super intellectual. They read books. They still do a lot. My point is, when I write a new blog post, I want to know how many people like my stuff on Twitter and whether the post got a ton of likes on LinkedIn, but that takes a lot of time. Now, if we're constantly recording ourselves, if everything is captured, do we actually have time to live our lives, and what does it actually mean? Maybe that's too meta, but it sort of...

Jessi: In some ways, Alex, that's the question that these conversations always come back to. And Liza, you look like you've heard it before.

Liza: My pre-tweens, I guess, spend their entire lives on Musical.ly, and they're more concerned about how many likes they get than privacy. Privacy means absolutely nothing to them. They want to be broadcast. They want everyone to like them, to comment. That's so much more important to them than privacy.

Howard: The issue of living your life. I mean, you go to a concert, and how many phones are up in the air, people not quite listening to the concert but showing the rest of the world that they're listening to it. That's a little bit of a frustration, where you really want to experience some things first by yourself and have your own feelings about them, rather than being totally influenced by the crowd around you, particularly the online crowd.


It's not that Snapchat is a documentary feature of an experience but that it is actually a facet of the experience. If I'm an entrepreneur how do I design into that world? Do I want to be designing in order to help people pull out of it or do I want to design in order to help them dive into it more?

-Jessi Hempel, Senior Writer at Wired


Jessi: Well, there's also a school of thought that would say that actually that is now part of the experience - if you didn't Snapchat it, it didn't happen. It's not that Snapchat is a documentary feature of an experience, but that it is actually a facet of the experience. If I'm an entrepreneur, how do I design into that world? Do I want to be designing in order to help people pull out of it, or do I want to design in order to help them dive into it more?

Alex: Well, I think as an entrepreneur you have to go with the flow and you have to go into the future. It's not inconceivable to think that some of the things we will do will end up being dumb, but that's okay. We will self-correct later. I am a deep believer in the power and creativity of humanity, but you can't help but wonder, to Howard's point, if somebody's Snapchatting half of the concert, they're probably not watching it. That's their choice. As a founder, if you build something that makes this go viral, you'll make a lot of money, so probably go do that.

Howard: Well, you could do both, right? You could make an ad blocker or you could make a better ad network tool, and they both will sell. So, you can pick which one you're most passionate about.

Jessi: Fair enough. Anything from the audience? Oh, right, way back here. Give us your name and affiliation quickly and then your question. There's a mic coming to you.

Audience Question 1: Thanks for speaking. My name is David Blutenthal, the CEO of Snapwave. We're a global community that combines photos and music for more visually engaging music experiences. Since we're talking about music a bit, and we're here today about imagery, I'm wondering what you think of this as an investment space: engaging music experiences with any sort of visual layer - photos, VR, video, what have you.

© Robert Wright/LDV Vision Summit

Jessi: So, an investment space for engaging musical experiences?

David: Correct. Thank you.

Howard: Well, look, the fact is that from tweens on, every few years there's a new music thing that catches on and becomes very, very important to them, so it's a very ripe investment space. But it's a very fickle one, and so it's a little difficult. At First Round we've mostly avoided them since Turntable.fm, because we've seen how fickle it can be. You get a million users in year one and then you're back down to a hundred thousand in year two, and so it's difficult for us to know which one's going to become the next Snapchat and which one's going to become the next Whisperer, you know, or Secret or whatever.

Jessi: Fair enough. You can't even quite get the name, Secret or Whisperer. Is Whisperer even still around? Yeah. Alright, another question from the audience. Back here.

Audience Question 2: Hi, I'm Derek Hoiem, CEO of Reconstruct and we hear a lot about social media. I was wondering if there are any sectors that you think are heavily under-invested in?

Alex: Not social media. I mean, in what sense? I think you guys are working on mind-blowingly awesome next-gen applications that are going to be totally life changing. I think vision algorithms, machine learning, AI - all of these things, not just for the sake of the algorithm but when applied to real-life businesses, are going to be completely life transformative, and already have been, I think. So, any one of those areas is highly interesting, at least to me personally.

Jessi: Cool. Let's take a question right over here.

Audience Question 3: Talking about RetailNext - a company which can zoom inside a store and then give some actionable information - how do you connect that with an increase in revenue? Is it just based on some past experience, or what is the connection? How do you sell these kinds of things to large corporates who would want to use them?

Liza: Well, I mean, they're interested in many facets of the consumer experience, whether or not they had good or bad customer service, even in terms of merchandising - you know, what was interesting to the consumer: What did they dwell on most? What caught their eye? Those types of things are very difficult to ascertain without seeing the movement around the store. Those are the types of things that we've seen that retailers are very interested in.

Audience Question 3: Yeah, but how do you put a value on that is my question. If I were to sell it to a company right now, I would say, how much do you want this for? Right? For free it's great, but for a million dollars, where do you draw that price point?

Liza: You know, I don't - I haven't seen their latest ROI model, but I think from a merchandising perspective, if you have an idea as to what's working versus what's not working, what store layouts work as opposed to store layouts that don't, that's huge in terms of uplift of sales.

Jessi: Okay, and there was a question behind you. Right back here. Thank you.

Audience Question 4: Thank you. My name's Jacob Loewenstein. I run MIT's VR and AR community. It sounds like when it comes to investing in social media, there's a huge advantage if you can predict shifts in societal values. So, suddenly people value publicizing themselves; then perhaps suddenly people prefer privacy, or sharing only among a niche audience. I'm curious if any of you predict or see an upcoming shift in values that might tip you off or give you a clue as to where to invest next?

Howard Morgan, Partner & Co-founder of First Round Capital © Robert Wright/LDV Vision Summit

Howard: That's a terrific question, because what we do see is that roughly every two years, the tweens are the predictor. They're the ones who do the next thing, and the difference today is that the kids who are becoming tweens now have pretty much been on iPads for five years, so they've had a much different growth experience than even the generation two or three years before them. They're sharing differently, and I don't know that I can see what to predict yet, but there are definitely some differences happening. One of them is the importance of music to them. They're not texting anymore - as Jessi said, they're Snapchatting - and they have the time to figure out...

You know, The New Yorker had a fantastic cartoon last week which showed a patient in the doctor's chair with his head exploded. It said, "I was just another 40-year-old trying to learn Snapchat." What you have as a tween is time to explore an app that we as adults in general don't have, and Snapchat has no instructions, so you have to explore it to figure out what to do with it. We don't have the time to do that, but this new generation will have even more language at their disposal to figure out what these things might do, so they'll explore them faster. So, I think we'll see a faster pace.

Jessi: Just to build on that, one thing that I'm very curious about is the visual language that's emerging, that this new, young generation is pioneering. I'm not fluent in reading it, let alone writing it, but it feels to me that people assume you can understand the language because it harkens back to pictography: if you could see pictures on the cave wall, then certainly you can make and share images. But as I look at that language, in particular emerging on the Snapchat platform, I think we're moving towards a world in which there's going to be a way of writing in images that carries the same nuance that writing in language carried in the last century, and we're going to have to somehow get ahead of decoding it.

Howard: IMHO. I mean, we had to get through that battle with the text world, which changed what we were writing, and learn that language. And it is a new language - the way you're doing things in Snapchat, what things you're putting on people, the emojis you're putting in text - but it's not unlearnable. It's just that kids always want a language that they can keep from their parents, from their elders, for a little while.

Jessi: Fair enough, and I guess in that paradigm I qualify as the elder?

Howard: Yeah.

Jessi: Yeah, thanks. I don't want to leave this question, though, because it's a great question. I want to go back to what cultural shifts you might see just up ahead and where they might come from.

Liza: We don't invest in consumer just because it is so fickle, and we're more interested in business models that are longer lasting. But I think it's extremely challenging. Just having tweens, seeing how they communicate and what they do and what's hot now and not - I mean, it's even very difficult as a parent to make sure that they're being safe, to keep up with what they're doing, let alone investing in it. So, anyone who does that, hats off.

Jessi: Yeah. Fair enough.

Howard: I think the other big cultural shift is - and I don't want to say it's attention span - the bite size of what they're doing. You know, 50 years ago I would write letters, multiple-page letters to people, certainly in social interactions. What's the longest thing now? I wrote a blog post about eight years ago saying that the Gresham's law of blogging was that cheap tweets would drive out the dear blogging. Blogging takes a long time, so even though those mediums are doing reasonably well, so much more is happening in short form, whether it's 140 characters or 500 characters. It's not 500 words, and that's been one of the biggest differences: everything has to be said or thought through in something that can be consumed in one or two minutes.

Jessi: One or two minutes. Which is exactly how much time I have to sum up all that we have talked about in the last 40. Thank you guys so much for joining us for this conversation.

The annual LDV Vision Summit will be occurring on May 24-25, 2017 at the SVA Theatre in New York, NY.

March 31 Deadline for Competitions at LDV Vision Summit

Startup Competition:
Visual technology startups with less than $2M in funding are invited to apply to our annual LDV Vision Summit Startup Competition.

Entrepreneurial Computer Vision Challenge:
Students, professors, PhDs, enthusiasts and hackers are invited to apply to showcase brilliant computer vision projects at our Entrepreneurial Computer Vision Challenge (ECVC) by March 31.

To apply, simply fill out a form with a link to a pitch deck or recent project.

Finalists present at our 2017 LDV Vision Summit to over 500 investors, startups, and technologists, as well as to incredible judges like Albert Wenger of Union Square Ventures, Josh Kopelman of First Round Capital, Marvin Liao of 500 Startups, Jenny Fielding of TechStars, Serge Belongie of Cornell Tech, and many more...

Finalists from prior years have raised funding, found partners, and hired new talent from presenting at our Vision Summit.

Semifinalists receive remote mentoring from LDV General Partner Evan Nisselson, then have the opportunity to present in NYC on May 22 to our experts in computer vision & entrepreneurship and receive complimentary tickets to our Vision Summit.

Examples of visual technologies: businesses empowering photography, videography, medical imaging, analytics, robotics, satellite imaging, augmented reality, virtual reality, autonomous cars, media and entertainment, gesture recognition, search, advertising, cameras, e-commerce, sentiment analysis, and much more.

[Photograph of judges from the 2016 Startup Competition including, in no particular order, Jessi Hempel, Senior Writer at Wired, Christina Bechhold, Investor at Samsung, Brian Cohen, Chairman of NY Angels, Taylor Davidson, Unstructured Ventures, Barin Nahvi Rovzar, Executive Director of R&D and Strategy at Hearst, Adaora Udoji, Chief Storyteller at Rothenberg Ventures, and others such as Josh Elman, Greylock, David Galvin, Watson Ecosystem at IBM Ventures, Jason Rosenthal, CEO at Lytro, Steve Schlafman, Principal at RRE Ventures, Alex Iskold, Managing Director at Techstars, Justin Mitchell, Founding Partner at A# Capital, Richard Tapalaga, Investment Manager at Qualcomm Ventures. © Robert Wright/LDV Vision Summit]

Carbon Robotics is Growing an Unstoppable Team to Create Innovative Products After Win at LDV Vision Summit Startup Competition

Rosanna Myers, CEO & Co-founder of Carbon Robotics © Robert Wright/LDV Vision Summit

The LDV Vision Summit is coming up on May 24-25, 2017 in New York. Through March 31 we are collecting applications to the Entrepreneurial Computer Vision Challenge and the Startup Competition.

Rosanna Myers, CEO & Co-founder of Carbon Robotics, won our LDV Startup Competition in 2016 and we had a chance to speak with her this March about their past year:

Besides winning the LDV Startup Competition, you won many awards in 2016 - Forbes 30 under 30 in Manufacturing, Best Demo Pit at Launch, CES Best Startup, Robotics Business Review Top 50 Companies in World - what have been the keys to your success over the last year?
It was a good year! I think the key to our success so far has been pretty straightforward – we focused on solving a hard problem that makes an important impact. My cofounder and I actually made that call very early on. We decided to not waste time on trivial matters, but instead to leverage our talents to help people.

From the beginning, we knew exactly who we wanted to help and why it was important. Of course, strategies and tactics change, but clarity around the core mission is what inspired us to overcome challenges. We had a lot of fun showcasing our work, but honestly I think that was the easy part.

What challenges have you overcome to achieve those successes?
When we started the company, we were told it wasn’t possible to build such a high performance robotic arm at our target price point. When we asked why, we got answers like “well, based on our distributor’s inventory and known configurations, it would be too expensive” or, my personal favorite, “our contractors told us it’s not possible.” We knew it was possible, but we also knew we had to get creative.  

For months, it was just pure grind. We dug deeply into the supply chain to learn what was easy to customize, wrote complex control software to correct for cheaper hardware, and drew inspiration from unusual disciplines. In the end, we successfully created a device that was within spec and about 10x cheaper – which felt incredible to accomplish.

An article in Inc. quoted you saying “Impossible is a mindset, too often the answers we got weren't at all based on physics, they were based on precedent - that is an important distinction.” Do you think that out-of-the-box thinking embodies a major characteristic of Carbon Robotics?
It’s essential. Derivative thinking leads to derivative products. When hiring, we screen for people who constantly challenge assumptions and make unorthodox connections, then give them freedom and support to invent. So far, our pickiness has paid off. Everyone we’ve hired is the best at what they do, but also highly cross-functional and laser-focused on shipping product.  

Being a young startup is quite clarifying in that regard – when you have to pull off the unreasonable, you need a team that’s unstoppable.  

Are you hiring at the moment/what positions looking for?
We are! Right now, we’re looking for Computer Vision and Robotics Software Engineers on AngelList.

On the CV side, we want people who understand the hard math behind low-level algorithm development (rather than just implementing open tools), and who have a system-level grasp of reconstruction pipelines. (A decent proxy is probably working in C++ rather than Python.) For software, we need people with deep robotics backgrounds who can translate real-world tasks into flexible behaviors.

In all cases, we’re looking for people who want to democratize robotics. A lot of what we’re doing is taking really hard problems and then building tools so that people without specialized knowledge can tackle them. We’re asking you to help us steal fire from the gods and bring it to mankind.

Why do you believe robotics is such an interesting application of Computer Vision right now? How is Carbon Robotics using CV to disrupt the traditional robotics manufacturing sector?
Robotics is one of the most high-impact applications of Computer Vision. Robotic arms today are largely dumb and blind, which majorly hamstrings their utility. Giving them eyes and a brain to better understand their environment is key to catalyzing adoption. We're at this amazing point in time where a whole bunch of developments in hardware and software are converging to create something damn close to magic.

There’s also enormous potential to impact people’s quality of life – everything from automating dangerous tasks to enabling assistive devices to creating the building blocks of an entirely new medium. In many ways, robotics today is like computers in the 80s or the internet in the 90s. There’s a big appetite for the first applications and a much more revolutionary shift underway.

What are you looking to accomplish in 2017?
I can’t reveal too much of what we’ve been working on, but we’ll have some exciting announcements later in the year.

Do you have any recommendations for startups in their seed stage who are applying to the LDV Startup Competition?
Definitely apply! We weren’t sure if we should apply since we were already fundraising and weren’t sure if we’d fit the criteria, but I’m so glad we did. The summit had the perfect blend of smart attendees and an intimate format, which made it easy to make meaningful connections.

Apply to our annual LDV Startup Competition by March 31, 2017.

Panel of Judges: (L to R) Jessi Hempel, Senior Writer at Wired, Christina Bechhold, Investor at Samsung, Brian Cohen, Chairman of NY Angels, Taylor Davidson, Unstructured Ventures, Barin Nahvi Rovzar, Executive Director of R&D and Strategy at Hearst, Adaora Udoji, Chief Storyteller at Rothenberg Ventures, and others such as Josh Elman, Greylock, David Galvin, Watson Ecosystem at IBM Ventures, Jason Rosenthal, CEO at Lytro, Steve Schlafman, Principal at RRE Ventures, Alex Iskold, Managing Director at Techstars, Justin Mitchell, Founding Partner at A# Capital, Richard Tapalaga, Investment Manager at Qualcomm Ventures (in no particular order). © Robert Wright/LDV Vision Summit

How has winning the LDV Vision Summit Startup Competition 2016 had a lasting impact on Carbon Robotics?
I think the network is fantastic. We made several lasting connections from the LDV Vision Summit community and Evan has been a great mentor to us.

Applications to the 2017 ECVC and Startup Competition at the LDV Vision Summit are due by March 31, apply now. Our next LDV Vision Summit will take place on May 24 & 25 in NYC. (Early bird tickets at 80% discount are available until March 31).