Meet LDV Capital’s Expert In Residence, Computer Vision Leader Serge Belongie

LDV Vision Summit 2016. We are a family affair! Serge and August Belongie thanking the audience for a fantastic, inspirational gathering. ©Robert Wright

We are honored to announce that computer vision and machine learning expert Serge Belongie has joined LDV Capital’s team as our first Expert in Residence [EiR]. Serge is a professor of Computer Science at Cornell University and Cornell Tech and a technical leader in our field of visual technologies. He has co-founded several computer vision startups that have been acquired, and he thrives on empowering others to succeed.

“I am excited to deepen my collaboration with Evan and LDV Capital in our mutual pursuit of empowering students, PhDs, and our ecosystem to leverage deep technical skills to solve problems and build valuable businesses that benefit society,” says Serge.

I met Serge in March 2013, and I have been inspired by his insights, expertise, passion, and modest genius since the day we met. We share a passion and curiosity for how visual technologies will empower businesses and humanity. His combination of industry expertise, professorial guidance, enthusiasm for empowering students to build businesses, and technical advisory experience is extremely rare. We have been working closely together on our LDV Vision Summit since 2014, and we are honored to collaborate more deeply going forward.

We joined forces with others to create the successful annual LDV Vision Summit, which started as a one-day event in June 2014. It has since evolved into a two-day event with over 500 attendees, 80 speakers, 40 sessions, and 2 competitions [Startup Competition & Computer Vision Challenges]. Our summit is unique in that it gathers visionaries and cutting-edge technologies across our visual technology ecosystem, including startups, investors, CV/ML/AI researchers/professors, technology/media executives, and creators, in one place to inspire, raise capital, recruit talent, and facilitate commercial deals.

“Frequently I cross paths with talented students, researchers, and PhDs who want to explore commercialization but are overwhelmed by the options. LDV Capital, Cornell Tech, and I are always looking for ways to further empower technological advancements and our ecosystem,” says Serge.

Serge is also a co-founder of several companies, including Digital Persona (which merged with CrossMatch in 2014), CarCode (which was acquired by Transport Data Systems), Anchovi Labs (which was acquired by Dropbox in 2012), and Orpix. He also serves as technical advisor to Osmo and other companies.

LDV Community Dinner, October 2014. Serge introducing himself and, as always, keeping it lively. ©Robert Wright

LDV Vision Summit 2016, Entrepreneurial Computer Vision Challenge Winner: Grokstyle, co-founded by Cornell researchers Sean Bell and Kavita Bala. Serge congratulating CEO Sean Bell. ©Robert Wright 

“As computer vision finds more success in practical domains, my research interests have shifted from the fundamentals of object recognition to the challenges of human-in-the-loop computing. I find this area fascinating since it involves humans and machines working together to solve problems that neither can solve in isolation.”

Serge is also pushing the envelope with regard to computer vision and machine learning research across multiple projects, such as:

Residual Networks Behave Like Ensembles of Relatively Shallow Networks, with Andreas Veit and Michael Wilber.

Context Matters: Refining Object Detection in Video with Recurrent Neural Networks, with Subarna Tripathi, Zachary Lipton, and Truong Nguyen.

Visipedia: Fine Grained Visual Categorization with Humans in the Loop, with Pietro Perona.

Learning to Match Aerial Images with Deep Attentive Architectures, with James Hays and Tsung-Yi Lin.

Boosted Convolutional Neural Networks, with Mohammad Moghimi, Mohammad Saberian, Jian Yang, Li-Jia Li, and Nuno Vasconcelos.

View more research from Serge.

LDV Capital’s Experts in Residence work part time with us: tracking interesting companies, collaborating on due diligence, providing valuable advice to portfolio companies, analyzing trends with our team, and collaborating closely on our annual LDV Vision Summit. We look forward to adding more experts to our LDV Capital team to continue empowering people, universities, and our ecosystem, and to support technical people as they commercialize deep technical research. Many of LDV Capital’s portfolio companies leverage deep technical research, and they are always recruiting smart people.

LDV Vision Summit 2014.  How will image recognition disrupt businesses and empower humanity? How can we inspire more researchers to bring their visions to market? Moderator: Evan Nisselson Panelists: Serge Belongie, Professor, Cornell NYC Tech, Computer Vision Expert; Gary Bradski, Magic Leap, VP Computer Vision & Machine Learning; and Moshe Bercovich, Shutterfly, GM Israel. ©Dan Taylor

Serge Belongie received a B.S. (with honors) in electrical engineering from Caltech in 1995 and a PhD in Electrical Engineering and Computer Sciences from Berkeley in 2000. While at Berkeley, his research was supported by an NSF Graduate Research Fellowship. From 2001 to 2013, he was a professor in the Department of Computer Science and Engineering at the University of California, San Diego. He is currently a professor at Cornell Tech and in the Department of Computer Science at Cornell University. His research interests include computer vision, machine learning, crowdsourcing, and human-in-the-loop computing. He is also a co-founder of several companies, including Digital Persona, Anchovi Labs, and Orpix. He is a recipient of the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review “Innovators Under 35” Award, and the Helmholtz Prize for fundamental contributions in computer vision.

How do you know if your audience is laughing, talking, or sleeping?

Individual real-time audience analysis leverages infrared and optical cameras to detect whether viewers are watching television while lying under a blanket, lying down on the couch, or laughing.

Historically, television audience viewing analyses have been based on households rather than individuals, and the currency of TV advertising has been based on 30-year-old technology. Audience insights from computer vision analyses will deliver more valuable individual attention-based viewing engagement statistics.

The mission of TVision Insights is to fix the broken parts of television advertising. LDV Capital is excited to invest in the TVision team alongside Accomplice, Jump Capital, ITOCHU Technology Ventures, and other investors.

TVision is an audience measurement company pioneering the way in which brand advertisers, TV networks, and over-the-top content [OTT] platforms measure attention. Using TVision’s data showing how many seconds viewers actually pay attention to what they are watching, media teams can optimize their allocations and networks can improve their programming. Advertisers can see higher returns on ad spending, and networks can make better, more engaging programming.

TVision relies on cameras and computer vision to deliver real-time individual viewer analyses for advertisers and content programmers. They leverage proprietary facial recognition technology that allows users to understand engagement and sentiment in viewers’ natural watching environments.
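TVision's models are proprietary, but the metric the company describes (seconds of genuine attention per individual viewer) can be illustrated with a toy calculation. The function below is a hypothetical sketch, not TVision's actual method: it assumes a per-second boolean signal indicating whether a viewer's face was detected facing the screen, and reduces that stream to attended seconds and an attention percentage.

```python
# Toy sketch of a per-viewer attention index (hypothetical, not
# TVision's real algorithm): given one boolean per second of viewing
# (face detected and oriented toward the screen), compute the seconds
# of attention and the attention percentage.

def attention_index(per_second_flags):
    """Return (attended_seconds, attention_pct) for one viewer."""
    total = len(per_second_flags)
    if total == 0:
        return 0, 0.0
    attended = sum(1 for watching in per_second_flags if watching)
    return attended, round(100.0 * attended / total, 1)

# Ten seconds of viewing: the viewer looks away for three seconds.
flags = [True, True, True, False, False, False, True, True, True, True]
seconds, pct = attention_index(flags)
print(seconds, pct)  # 7 70.0
```

A real system would derive those per-second flags from face detection and gaze estimation on the camera feed, then segment the results by demographic, program, and time as described below.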

Nielsen has been around for decades, yet to date it tracks household viewing using 30-year-old technology.

Yan Liu, co-founder and CEO, said, “When I was running my own digital ad agency, I was able to leverage lots of data and tools for digital media. Then I started to work with other agencies that handled TV ads. I was surprised that there is very little data available for optimization. By providing better data to the TV world, we can make it much more efficient."

TVision’s per-person ratings panel size has surpassed all its competitors to become the largest in Boston, which is the eighth largest designated market area (DMA) in the country, comprising 600 households and more than 2,000 people. TVision currently captures and reports on viewer attention across 285 channels and 99% of the content on Netflix, Amazon, and Hulu.

The living room is the new research lab for advertisers and media programming companies, who can rely on this data instead of using expensive, non-real-time focus groups. Advertisement and content measurement can be tested in real time and on an individual basis.

At what point in a show do people leave the room, and when are they distracted by other people in the room? To help answer these questions and more, TVision also captures real-time data on actual in-room engagement based on demographic, time, and program, such as during the first 2016 presidential debate.

TVision reported that viewers paid the most attention during the debate when Hillary Clinton responded to Donald Trump, “You live in your own reality.” Breaking attention down by key demographics, TVision found that Hillary Clinton scored +26% with Hispanic viewers, +13% with African American viewers, and +3% with female viewers, while Donald Trump received +8% more attention from male viewers.

At the 2016 Oscars, TVision reported that viewers paid the most attention when Lady Gaga received a standing ovation on stage and at home as she addressed campus assault. The top "smiled moment" was when Leonardo DiCaprio won the award for Best Actor in a Leading Role, garnering a Smile Index rating of 2.75.

Check out TVision’s other exciting audience attention deep dives for Super Bowl 50, the Patriots vs. Broncos AFC championship game, the 2016 American Music Awards, and the 2015 Emmys.

TVision never stores images or videos, so viewers do not need to be concerned that cameras are watching them while they watch television.

Dan Schiffman, co-founder and CRO, said, “We take privacy very seriously. It is honestly our primary concern. We do not store or transmit any images or videos ever. We analyze the living room scenario in real time, store the data as 0s and 1s with no personally identifiable information, and then upload that data to our servers for analysis. We own our data, and we rely on our panel homes to provide it. We would never compromise that trust.”  

LDV Vision Summit 2016: Dan Schiffman, Co-Founder & CRO, TVision Insights

LDV Capital invests in people who are building visual technologies to solve problems and empower businesses and humanity. The TVision Insights team is another great example of domain experts leveraging visual technology to solve problems and build a valuable business. We are honored to collaborate with them.

Programmers and advertisers can finally understand in real time when their audiences are laughing, crying, sleeping, or mad. We hope content production, advertising, and entertainment around the world will become more contextually relevant and inspiring for all viewers.

TVision Insights is hiring.

Clarifai raises $30M Series B - Delivering The Power Of Artificial Intelligence Into Everyone’s Hands

We are proud to share the news that our portfolio company Clarifai has raised a $30M Series B financing led by Matt Murphy at Menlo Ventures.

“One of the biggest reasons for our fundraise is to continue enabling anyone in the world to train and use AI,” says Matthew Zeiler, CEO & Founder of Clarifai.

We first invested in Matthew, his team, and the Clarifai vision in June 2014, when “artificial intelligence” was still a matter of science fiction. We invested again in April 2015 in their Series A, led by Albert Wenger at Union Square Ventures, and again in their Series B alongside Menlo Ventures, Union Square Ventures, Lux Capital, Qualcomm Ventures, and other investors.

The Clarifai team has made solid progress adding marquee customers and expanding their developer community. Success never happens overnight and it is always critical to gather brilliant people who collaborate toward a shared vision.

Clarifai was founded in 2013 to solve real-world problems with Artificial Intelligence, starting with visual recognition. Matthew was working on early Clarifai algorithms at NYU along with his professor Rob Fergus. Clarifai won top awards in the ImageNet Challenge in 2013, beating out teams from IBM, Adobe, and other major companies. In June 2014, Rob Fergus, Research Scientist at Facebook AI and NYU Professor, gave a presentation highlighting the potential for Clarifai at our annual LDV Vision Summit.

May 2016: Matthew Zeiler, CEO of Clarifai, gives a presentation at our LDV Vision Summit: “The revenue potential is tremendous for accurately, automatically and efficiently keywording all the videos in the world.”

In a blog post announcing Clarifai’s Series B financing today, Matt outlined Clarifai’s vision:

We want to teach computers how to see the world like humans. Recognizing objects is one piece of the puzzle, but by itself, it’s not very “human.” When people see the world, they don’t just see objects. They see complexity, context, and relationships that make up a greater understanding, an understanding that differs from person to person.

So, when we’re building artificial intelligence tools at Clarifai, what we’re really thinking about is human intelligence and how we can amplify it to help solve real-world problems.

We have a really exciting roadmap that continues to position us as the independent A.I. company out there, and the only bottleneck on executing is the number of people in the company. We plan to grow all functions, from research to engineering to developer evangelists to sales and marketing.

Train your own visual recognition model and search any image with custom training and visual search.

Every day more of our world is optimized and empowered by people building visual technology businesses. These businesses analyze visual data via computer vision, machine learning, and artificial intelligence. Self-driving cars, baby monitors, shopping recommendations, news feeds, mapping, personal assistants, biometrics, gesture recognition: visual technology will exponentially impact every business vertical and become a core part of humanity.

Matt Zeiler at one of our monthly LDV Community Dinners. ©Ron Haviv

We are thrilled to continue collaborating and investing in Matt, the Clarifai team and their vision!

Who Created That Image? Solving the online content identity problem leveraging the blockchain.

Love The Living of Life: Bumbershoot Festival Seattle, WA 9/4/95 ©Evan Nisselson  


Every day, billions of images, songs, videos, and written works are created and shared online to communicate with loved ones, with friends, and with the masses. The most interesting content is then shared exponentially across multiple social networks. Each time this content is shared, it loses important metadata, such as the attribution for who created it and important associated caption information.

I have been a photographer since I was 13 years old. I have worked as a photo agent and a photo editor, and many of my friends create original visual content.  The holy grail for all creators and content owners, including myself, is the ability to track our creativity so we can better understand how and where it is being enjoyed and create future monetization opportunities—or at least have our names associated with our content wherever it is viewed.

As the commercial web has grown over the last 20 years, many different companies have tried to deliver digital rights management solutions, but none has succeeded yet.

Love The Living Of Life: Billiards, San Francisco, CA 4/26/96 ©Evan Nisselson

For example, imagine that a photographer’s image is published on the National Geographic website, then shared to Facebook, then copied and posted to Reddit, then published on an individual’s blog, and then posted to Pinterest, Instagram, and back to Reddit. The creator’s attribution and valuable caption data would not continue to be associated with that image each time the photo is shared to a new website on the Internet.

Mediachain Labs is working to solve this content identity problem by building an open, universal media library leveraging the blockchain. Jesse, Denis, and their team are building Mediachain to automatically connect media to its creator and relevant metadata.

LDV Capital is excited to invest in the Mediachain Labs team and this ambitious goal along with Union Square Ventures, Andreessen Horowitz, RRE Ventures, and other investors.

The Mediachain Labs team says, “What if the information about all media ever created was completely open, and you could instantly know everything about whatever you were viewing, watching, reading, or listening to — who made it, what it was, where it originated — regardless of how you came across it?”

Mediachain connects media to information through the content itself. It combines a decentralized media library with content identification technology to enable collaborative registration, identification, and tracking of creative works online. In turn, developers can automate attribution, preserve history, provide creators and organizations with rich analytics showing how their content is being used, and even create a channel for exchanging value directly through content no matter where it is.
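The core idea above, identifying a work through its content rather than its file or embedded metadata, can be sketched in a few lines. This is a hypothetical illustration, not Mediachain's actual protocol: a registry keyed by a hash of the pixel data alone, so the attribution record survives even after sharing platforms strip the metadata.

```python
import hashlib

# Sketch of content-addressed attribution (a hypothetical toy, not
# Mediachain's real protocol): the registry key is a hash of the pixel
# data only, so the record survives metadata stripping on re-shares.

def content_id(pixel_bytes):
    """Identify the work by its content, not its container or metadata."""
    return hashlib.sha256(pixel_bytes).hexdigest()

registry = {}  # content_id -> attribution record

def register(pixel_bytes, creator, caption):
    registry[content_id(pixel_bytes)] = {"creator": creator, "caption": caption}

def look_up(pixel_bytes):
    return registry.get(content_id(pixel_bytes))

# The photographer registers the original upload...
pixels = b"...raw pixel data..."
register(pixels, "Rick Smolan", "Muhammad Ali, Tokyo, 1976")

# ...and a re-share with all metadata stripped still resolves to the
# creator, because only the pixels are hashed.
print(look_up(pixels)["creator"])  # Rick Smolan
```

In practice a perceptual hash would replace the exact SHA-256 so that resized or re-encoded copies still resolve to the same record, and the registry would live in a decentralized, blockchain-backed store rather than in memory.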

Rick Smolan, photojournalist and creator of the “Day in the Life” series, said, “Mediachain Labs is tackling a serious problem that’s faced creators from day one: how can I make sure I am credited for the work I create? Mediachain’s blockchain technology tethers ownership to content, ensuring to both creators and publishers—in any medium—that proper attribution (and hopefully financial remuneration) is perpetually connected to each work of art. This has been a long time coming, and Mediachain Labs seems to have broken the code.”

Rick elaborates on the story behind his Muhammad Ali photo, which could provide valuable additional metadata associated with the photo when it appears online. Rick said, “I ran into Muhammad Ali when I stepped into an elevator in Tokyo’s Keio Plaza hotel in 1976 with my friend David Burnett, an incredibly talented photojournalist who had invited me to become part of his fledgling photo agency [Contact Press Images, which he co-founded with photo impresario Robert Pledge].

Muhammad Ali, 1976 ©Rick Smolan

In the elevator with Ali was Howard Bingham, Ali’s personal photographer. David and Howard were old friends, and in the space of that 30-second elevator ride Howard told us that in a few hours he and Ali were headed to Korea to tour US Army bases for a week.

Howard told us they had two extra seats on the plane and invited us to join them on the tour (that’s how things worked back then!). So off we went on a fascinating weeklong behind-the-scenes tour of Korea with the champ. At every base Ali would get in the ring to spar with a few soldiers.

It was surreal, and I kept thinking that if we had pushed the elevator button 30 seconds later none of this would have happened. I have always thought about these wonderful moments of serendipity as the ‘fate stream.’”

Mediachain’s codebase is completely open source and leverages the blockchain. This makes it an ideal environment for collaboration and innovation among media organizations, distribution platforms, and independent creators and developers who want to retain control over their data while broadening their reach.

Millions of images and related metadata records have been contributed to Mediachain by participating organizations including The Museum of Modern Art (MoMA), Getty Images, the Digital Public Library of America (DPLA), and Europeana.

Dan Taylor, Founder and Principal Photographer at Heisenberg Media, said, “This sounds like a great solution to an age-old problem. I can imagine this cutting hours off my standard routine of tracking and gaining proper credit for works I've produced. Probably one of the most innovative uses of the blockchain I've seen yet. Can't wait to give this a go myself!”

Dan elaborates on how one of his photos was re-published exponentially online: “I was on the job covering the Web Summit in Dublin, Ireland, and just happened to be walking through the audience when I spun around and saw this. Luckily, I had a fisheye in my pocket and quickly swapped over to capture the scale of what I was viewing.

As this image was selected by the Web Summit as one of their top marketing images (and one they’re still using in some collateral today), naturally, it became quite a popular one for bloggers, journalists, and general fans of the event. In most cases, I don’t blame the second, third, and fourth level individuals who use the image, because they have no way of tracking down who created the work.

Web Summit in Dublin. ©Dan Taylor/Heisenberg Media

In fact, when doing a Google Images search, there’s no reference to me until the 19th page. And you know the best place to hide a body? Page 2 of Google search. Again, I don’t blame those who use it; I just wish there was a way that they could know who the original photographer was/is. If the Mediachain solution can retain and provide that information, we really are looking at the holy grail.”

Developers who are interested in the project can find out more and get involved by joining the community on GitHub or through their public Slack.  

Other People’s Weddings: Emma & Josh’s Wedding, Brooklyn, NY 1/2/99 ©Evan Nisselson

Love The Living Of Life: Jim and his son Matt, Glacier, WA 9/13/95 ©Evan Nisselson

Content creators and owners should also get involved by reaching out to the team at Mediachain Labs.

Grokstyle wins LDV Vision Summit 2016 Entrepreneurial Computer Vision Challenge

Entrepreneurial Computer Vision Challenge Winner: Grokstyle, Sean Bell, CEO & Co-Founder ©Robert Wright/LDV Vision Summit

Our annual LDV Vision Summit has two competitions. Finalists receive a chance to present their wisdom in front of hundreds of top industry executives, venture capitalists, and companies recruiting. Each winner also receives $5,000 in Amazon AWS credits.

1. Startup competition for promising visual technology companies with less than $1.5M in funding.

2. Entrepreneurial Computer Vision Challenge (ECVC) for any Computer Vision and Machine Learning students, professors, experts or enthusiasts working on a unique solution to empower businesses and humanity.

Competitions are open to anyone working in our visual technology sector such as: empowering photography, videography, medical imaging, analytics, robotics, satellite imaging, computer vision, machine learning, artificial intelligence, augmented reality, virtual reality, autonomous cars, media and entertainment, gesture recognition, search, advertising, cameras, e-commerce, visual sensors, sentiment analysis, and much more.

The Entrepreneurial Computer Vision Challenge provides contestants the opportunity to showcase the technology piece of a potential startup company without requiring a full business plan. It provides a unique opportunity for students, engineers, researchers, professors, and/or hackers to test the waters of entrepreneurship in front of a panel of judges including top industry venture capitalists, entrepreneurs, journalists, media executives, and companies recruiting.

In the 2014 and 2015 Summits, the ECVC was organized into predefined challenge areas (e.g., "estimate the price of a home or property," "estimate how often a photo will be re-shared") plus a "wildcard" category.

Initially we proceeded in the same way for the 2016 ECVC, but we found that the most exciting entries were overwhelmingly the wildcards, so we decided to go all-in on that category. Attendees at this year's summit bore witness to the outstanding lineup of finalists, including GrokStyle (visual understanding for interior design) from Cornell, MD.ai (intelligent radiology diagnostics) from Weill Cornell, DeepALE (semantic image segmentation) from Oxford University, and Vision+Love (automated kinship prediction) from Carnegie Mellon.

Congratulations to our 2016 LDV Vision Summit Entrepreneurial Computer Vision Challenge Winner: Grokstyle, Sean Bell, CEO & Co-Founder  

©Robert Wright/LDV Vision Summit

What is GrokStyle?

GrokStyle, co-founded by Cornell researchers Sean Bell and Kavita Bala, is developing state-of-the-art visual search. Given any photo, we want to tell you what products are in it, and where you can buy them. We want to help customers and retailers connect with designers, by searching for how others have used and combined furniture and decor products. The world is full of beautiful design, and we want to help you find it.

As a PhD candidate, what were your goals for attending our LDV Vision Summit? Did you attain them?

My goals were to understand the startup space for computer vision, to connect with potential collaborators, find companies interested in building on our technology, and generally get our name out there so we can have a running start.  The event definitely far exceeded our expectations and we attained all of our goals.

Why did you apply to our LDV Vision Summit ECVC competition? Did it meet or beat your expectations and why?

Serge Belongie recommended that we apply, and he saw the value that the summit would have for us. We were excited, but we certainly did not expect the amount of positive feedback, support, and connections that we made. My pocket is overflowing with business cards, and I’m excited to continue these conversations as we turn our technology into a company.

Why should other computer vision, machine learning, and artificial intelligence researchers attend next year?

I think that all CV/ML/AI researchers should attend events like the LDV Vision Summit.  The talks here are interesting and varied, and it is inspiring to see how algorithms and computer vision research are having a real impact in the world. You don’t get that at academic conferences like CVPR.

We try to have an exciting cross section of judges from computer vision experts, entrepreneurs, investors and journalists. Asking a question is Barin Nahvi Rovzar, Hearst, Exec. Dir., R&D & Strategy. Judges included: Serge Belongie (Prof., Cornell Tech, Computer Vision), Howard Morgan (First Round, Partner & Co-Founder), Gaile Gordon (Enlighted, Sr. Director, Technology), Jan Erik Solem (Mapillary, CEO), Larry Zitnick (Facebook, AI Research, Research Lead), Ramesh Jain (U. California, Irvine, Prof., Co-Founder Krumbs), Evan Nisselson (LDV Capital, Partner), Nikhil Rasiwasia (Principal Scientist, Snapdeal), Beth Ferreira (WME Venture Partners, Managing Partner), Stacey Svetlichnaya (Flickr, Software Engineer, Vision & Machine Learning), Adriana Kovashka (U. of Pittsburgh, Assist. Professor Dept. Computer Science) ©Robert Wright/LDV Vision Summit

What was the most valuable part of your LDV Vision Summit experience aside from winning the competition?

The most valuable part of the summit was connecting with three different companies potentially interested in building on our technology, and with four different potential investors/advisors.  Last year, a key potential collaborator had presented at LDV Vision Summit, looking for computer vision researchers to solve challenging problems in visual search, interior design, and recognition.  This year we were able to connect and say “we solved it!”

Sean Bell, CEO & Co-Founder of Grokstyle ©Robert Wright/LDV Vision Summit

Do you have any advice for other researchers and PhD candidates who are thinking about evolving their research into a startup business?

My advice would be to keep potential commercial applications in mind early on in the project, so that what you end up with at the end is easier to take out of the lab and sell to the world. For me, one of the most challenging aspects of research is deciding which problems are solvable and which are worth solving; if you are interested in startups, this is even more important. There is the extra step of understanding who cares and who wants to use it.

What was the timeline for you to take your idea for your research to evolving it into a startup plan?

We presented a research paper at SIGGRAPH 2015 about our ideas from last year. It has taken us a year to flesh out the work and develop it from a research prototype into a product prototype. But there is still a lot to do. I am graduating in a few months, and Prof. Kavita Bala is joining full time on sabbatical. We plan to hit the ground running this summer with our engineer Kathleen Tuite and two interns we are taking on. As technologists, we are looking to partner with business people to take the lead on evaluating which markets and customers can benefit the most from our technology. Starting in the fall, we plan on fundraising to help scale up our technical infrastructure.

Judges for our Entrepreneurial Computer Vision Challenges ©Robert Wright/LDV Vision Summit


Carbon Robotics wins LDV Vision Summit 2016 Startup Competition

Rosanna Myers, Co-Founder & CEO Carbon Robotics during her 4 minute startup competition presentation.   ©Robert Wright/LDV Vision Summit


Our annual LDV Vision Summit has two competitions. Finalists receive the chance to present in front of hundreds of top industry executives, venture capitalists, and companies that are recruiting. Each winning competitor also receives $5,000 in Amazon AWS credits.

1. Startup Competition for promising visual technology companies with less than $1.5M in funding.

2. Entrepreneurial Computer Vision Challenge for any Computer Vision and Machine Learning students, professors, experts or enthusiasts working on a unique solution to empower businesses and humanity.

Competitions are open to anyone working in our visual technology sector such as: empowering photography, videography, medical imaging, analytics, robotics, satellite imaging, computer vision, machine learning, artificial intelligence, augmented reality, virtual reality, autonomous cars, media and entertainment, gesture recognition, search, advertising, cameras, e-commerce, visual sensors, sentiment analysis, and much more.

Each year we review over 100 applications to select about 20 sub-finalists. Sub-finalists receive remote presentation coaching from Evan Nisselson and an invitation to our final in-person mentoring and judging by a group of experts before the Summit. Finalists are invited to present on stage at the LDV Vision Summit in front of the audience and judges.

Finalists included: Reconstruct, SmartPlate, GeoCV, Simile, Shelfie, Faception and Carbon Robotics.  

Congratulations to our 2016 Startup Competition winner: Rosanna Myers, Co-Founder & CEO of Carbon Robotics.

[L-R] Rosanna of Carbon Robotics, Rebecca of CakeWorks, Serge & August Cornell Tech, & Evan of LDV Capital ©Robert Wright/LDV Vision Summit


We asked Rosanna some questions about her goals and experience at our LDV Vision Summit.

What were your goals for attending our LDV Vision Summit? Did you attain them?

We were excited about coming to the LDV Vision Summit for a few reasons. First, we wanted to learn from people working on interesting and disparate aspects of visual technologies. It's usually the combination of disciplines and backgrounds that yields the most creative results, so we were attracted to the theme.

We also wanted to connect with potential recruits and investors in NYC since we're based in SF and so hadn't ever connected with the ecosystem. The experience was great for that. Getting to pitch on the main stage was amazing, because it made it easy for people to learn about what we're working on. After the pitch, we were approached by tons of high-quality engineers and potential partners, so it was a great success.

We try to have an exciting cross-section of judges, ranging from computer vision experts and entrepreneurs to investors and journalists. We typically have more judges than other events because we strongly believe more data will deliver better results. We also hope to create more opportunities for our competitors to raise capital when relevant. Startup Competition judges: Josh Elman (Greylock, Partner), Brian Cohen (NY Angels, Chairman), Jessi Hempel (Wired, Senior Writer), David Galvin (IBM Ventures, Watson Ecosystem), Christina Bechhold (Samsung, Investor), Jason Rosenthal (Lytro, CEO), Susan McPherson (McPherson Strategies, CEO), Steve Schlafman (RRE Ventures, Principal), Taylor Davidson (Unstructured Ventures), Justin Mitchell (A# Capital, Founding Partner), Adaora Udoji (Rothenberg Ventures, Chief Storyteller), Varun Jain (Qualcomm Ventures, Investment Manager), Josh Weisberg (Microsoft, Principal PM, Computational Photography), Tamara Berg (UNC Chapel Hill, Assist. Prof., Computer Vision). ©Robert Wright/LDV Vision Summit

Judge: Brian Cohen, NY Venture Partners, Founding Partner. NY Angels, Chairman ©Robert Wright/LDV Vision Summit

Why did you apply to our LDV Vision Summit competition? Did it meet or beat your expectations and why?

Same reason as above. We wanted to come to the Summit to meet with and learn from people and we wanted to pitch on the stage to amplify signal. It definitely exceeded expectations. I ran out of business cards in the first 20 minutes after pitching and had really interesting conversations.

What were the most valuable parts of your LDV Vision Summit experience aside from winning the competition?

I would say that the best part was forging new relationships. Talks are great, but they can be watched asynchronously. The magic of having events with tons of interesting and creative people is the engineered serendipity. In a few hours, we met with computer vision experts, storytellers, roboticists, writers, investors, students, designers, and even a manufacturer. You just can't do that with a LinkedIn search. Business is really about people and it's nice to develop relationships organically.

Did you benefit from the pre-summit competition mentoring? If yes, how?

Yes, the pre-summit mentoring helped us hone our messaging, which I think was instrumental in helping us tell our story effectively. We also liked getting to know the LDV Vision Summit team personally, albeit briefly, and we're sorry we had to leave the city so quickly. Hopefully we can connect again very soon once everyone recovers.

Josh Weisberg, Microsoft, Principal PM, Computational Photography asking a question. ©Robert Wright/LDV Vision Summit

©Robert Wright/LDV Vision Summit

Any advice to other entrepreneurs that might be thinking about applying to our next Startup Competition? Why should they apply?

Definitely apply, even if you are not sure how in-thesis you are. When we applied, I wasn't sure if we would be too tangential, because while we leverage a lot of computer vision to make applications easy and intuitive, we're not a computer vision company per se. We build robots and we build a platform. However, what we learned through the process is that LDV Summit is about the ecosystem and the impact, which can take many forms.

My advice if you get selected to pitch is to get to know the other founders. They are likely awesome people who are going through a lot of the same things you are and who have a high potential to become close friends and allies. It's called a competition and they do select a winner, but I think "winning" at these events is way more about people than titles.

Power To The People! You Made Our LDV Vision Summit A Success. Thank you!

Investing in visual technology businesses panel: Steve Schlafman/RRE Ventures, Josh Elman/Greylock Partners, Allison Goldberg/Time Warner Investments & moderated by Evan Nisselson/LDV Capital. ©Robert Wright/LDV Vision Summit

Wow - another fantastic Summit thanks to all of you brilliant people!

YOU are why our annual LDV Vision Summit gathering is special and a success every year. Thank You!

We are honored that you fly in from around the world each year to share insights, inspire, do deals, recruit, raise capital and help each other succeed!  

Congratulations to our competition winners:
- Startup Competition: Carbon Robotics, Rosanna Myers, Co-Founder & CEO
- Entrepreneurial Computer Vision Challenge: GrokStyle, Sean Bell, CEO & Co-Founder

Startup Competition Winner: Carbon Robotics, Rosanna Myers, Co-Founder & CEO. Rosanna says "The magic of having events with tons of interesting and creative people is the engineered serendipity. In a few hours at the LDV Vision Summit, we met with computer vision experts, storytellers, roboticists, writers, investors, students, designers, and even a manufacturer. You just can't do that with a LinkedIn search. Business is really about people and it's nice to develop relationships organically." ©Robert Wright/LDV Vision Summit


Entrepreneurial Computer Vision Challenge Winner: GrokStyle, Sean Bell, CEO & Co-Founder. Sean says "We had a blast this week! The LDV Vision Summit was the most rewarding event we have attended since starting GrokStyle, by a large margin. The talks were interesting and varied, and it was inspiring to see the kinds of real-world applications of computer vision that you might not see at academic conferences like CVPR. We are grateful for the opportunity to present our startup, and our presentation allowed us to connect with many new potential customers, investors, business partners, and collaborators. It's given us a running start to take our technology out of the lab and into the world." ©Robert Wright/LDV Vision Summit


A special thank you to Rebecca Paoletti and Serge Belongie: the summit would not exist without them!

“Beyond the fascinating sessions, there is the serendipity and the inspirational networking that leaves everyone wanting more. Until next year.” Paul Melcher, Kaptur, Editor

The quotes below from our community are why we created our LDV Vision Summit. We could not have succeeded without the tremendous support from all of our partners and sponsors:

Organizers:
Presented by Evan Nisselson, LDV Capital
Video Program: Rebecca Paoletti, CakeWorks, CEO
Computer Vision Program: Serge Belongie, Cornell Tech
Computer Vision Advisor: Jan Erik Solem, Mapillary  
Universities: Cornell Tech, School of Visual Arts, International Center of Photography
Sponsors: Amazon AWS, Facebook, FLIR Systems, GumGum, IDA Ireland, Microsoft Research, Qualcomm,  Vidlet
Media Partners: Kaptur, VizWorld
Coordinators Entrepreneurial Computer Vision Challenge: Andreas Veit, Cornell University, Doctor of Philosophy, Computer Science and Oscar Beijbom, UC Berkeley, Postdoctoral Researcher  

Day 1 Panel: Visual Sensor Networks Will Empower Businesses & Humanity. More and more visual sensors are being leveraged in the commercial and home markets. From energy and real-time data management in smart commercial buildings, to home video security and monitoring our babies when they sleep. How will they empower businesses and humanity? [Front to Back] Moderator: Evan Nisselson, LDV Capital Panelists: Gaile Gordon, Enlighted, Senior Director Technology,  Jan Kautz, NVIDIA, Director of Visual Computing Research, Chris Rill, Canary, CTO & Co-Founder ©Robert Wright/LDV Vision Summit


"LDV has been ahead of the pack in identifying and analyzing the visual tech space (far beyond just AR and VR) as one that is becoming an increasingly important and viable theme for many businesses. The convening of researchers, investors, and companies that are literally inventing our future is an immersive and instructive way to spend two days."  Barin Nahvi Rovzar, Hearst, Executive Director, R&D & Strategy

"LDV Vision Summit is one of those rare events that brings a very focused group of people together to talk about something that they all care about - computer vision & visual technologies. If you are investing or working in the space, you should definitely attend to meet other experts and startups building the next great products and companies." Josh Elman, Greylock Partners, Partner

“It was my first time at a summit as a student, and it was quite an eye-opening experience: lots of interesting people to meet with game-changing ideas in the works. I feel like it is a necessary bridge to even out the disparities between academia and industry in this rapidly growing field, and the opportunity to network with some of the greatest names in the field is something that should definitely not be overlooked!” Divyaa Ravichandran, Carnegie Mellon, Research Assistant

"LDV Vision Summit exposed me to some amazing entrepreneurs and thought leaders thinking about the world through a different perspective than any other tech conference I've been to.  Anyone that wants to get a glimpse at the future of how we process everything around us would benefit from attending." Ed Laczynski, Zype, CEO & Co-Founder

“The LDV Vision Summit is computer mad scientists meets visual storytellers with world class investors lurking in every corner. Two days that will stretch your brain and open your eyes to countless emerging possibilities in the imaging world.” Brian Storm, Founder & Executive Producer, MediaStorm

Day 2 Keynote: An Image Is Really Hundreds Of Data Points That Tell Us Who We Are. Anastasia Leng, Picasso Labs, CEO & Founder Anastasia worked many years at Google and is a serial entrepreneur leveraging technology to better understand visual content. Inside every image lies hundreds of unique data points that provide priceless information about your audience and their revealed visual preferences. Find out how technology can unearth this data to help you make smarter creative decisions and improve your visual strategy.   ©Robert Wright/LDV Vision Summit


“I believe all CV/ML/AI researchers should attend the LDV Vision Summit and its competitions. The LDV Vision Summit is a unique event. Unlike traditional conferences, where I mostly meet only researchers, at the LDV Vision Summit I made a lot of good contacts from the whole computer vision technology ecosystem: lots of CV researchers, investors, entrepreneurs, and VR/AR content creators. I was thrilled to participate.” Shuai Kyle Zheng, University of Oxford, Graduate Research Assistant

"The last 10 years of advances in mobile and cloud computing have been life changing. Visual computing feels like that next life changing tech movement, and the questions and ideas explored at the LDV Vision Summit will be critical to any tech player serious about being involved." Rohit Dave, Samsung, Corporate Development & Strategy
 

“My first experience at the LDV Vision Summit was a slam dunk:  true thought leadership content, a never-boring pace, and a great platform to drive awareness of our technology.  I was also very impressed by the caliber of speakers as well as the attendees.” Travis Merrill, FLIR Systems, SVP, CMO

“Truly inspired by the LDV Vision Summit and Evan Nisselson’s dedication to creating an exceptional audience experience. I was fortunate to join this year as a judge reviewing the selected start-ups and felt confident that they were well-vetted ahead of time. Hats off to the entire team for bringing this event to life, a very important annual gathering for those in the VC/Tech space.” Susan McPherson, McPherson Strategies, CEO

“As a first time attendee to the LDV summit, I was mostly expecting to see numerous technical presentations by various startups from the computer vision field.  I was pleasantly surprised that LDV Summit also covered important investment related topics and trends that helped me gain greater understanding of business aspects related to Visual Tech.  I am definitely looking forward to LDV 2017!” Jack Levin, Nventify, CEO & Founder, ImageShack, CEO & Co-Founder

“I enjoyed the unique atmosphere and fresh perspectives that come from bringing together vision researchers, entrepreneurs, and investors. I can't get that within the usual academic conference. Computer vision specialists who want to make an impact should make every effort to participate in this lively summit.” Derek Hoiem, University of Illinois at Urbana-Champaign, Associate Professor, Computer Science

Day 1 Keynote: reCAPTCHA: anti-spam, crowdsourcing, and humanity. reCAPTCHA was created 9 years ago as an anti-spam tool that also crowdsourced book digitization. reCAPTCHA has pushed the boundary of OCR research to the point that today machines can read text much better than humans. With the re-imagined "No CAPTCHA" reCAPTCHA, it has pivoted into the natural image recognition space and empowers deep learning systems with input from millions of brilliant human minds every day. Ying Liu, Google, reCAPTCHA, Manager ©Robert Wright/LDV Vision Summit


"The LDV Vision Summit was the most rewarding event we have attended since starting GrokStyle, by a large margin. The talks were interesting and varied, and it was inspiring to see the kinds of real-world applications of computer vision that you might not see at academic conferences like CVPR. We are grateful for the opportunity to present our startup, and our presentation allowed us to connect with many new potential customers, investors, business partners, and collaborators.  It’s given us a running start to take our technology out of the lab and into the world." Sean Bell, GrokStyle, Co-Founder and CEO [Winner of the LDV Vision Summit Computer Vision Challenge]

“It’s amazing to see all the innovation around creativity at the Summit,” said Brian Hunt, EVP Head of Believe Studios & Development. “I think our panel spurred a valuable conversation around the importance of not losing sight of the storytelling amidst the chase to keep up with it all."

“For computer vision entrepreneurs and investors, the LDV Vision Summit is not to be missed. There simply is no other place where you can find such high caliber ideas being explored, and high impact people discussing them.” Samson Timoner, Founder/CTO Mythical Labs.

“LDV brings together thinkers from many fields, which is key for productive, cross-disciplinary discussion about the present and future of imaging. As a professor who approaches photography from a humanities perspective, I found the talks and panels, from experts in fields beyond my own, illuminating. This is an excellent space for the generation of important conversations for anyone (photographer, academic, investor, branding expert, artist, software developer…) interested in the role of the image.” Lauren Walsh, New York University, Professor & Director NYU Gallatin Summer Photojournalism Lab.

Day 2 Panel: Content Creation For Virtual Reality Is Critical For Success - How Will Pros and Consumers Create 360, 3D, & Virtual Reality Content? Moderator: Jessi Hempel, Wired, Senior Writer. Panelists:  Jason Rosenthal, Lytro, CEO, Brian Cabral, Facebook, Dir. Engineering, 360 Camera, Koji Gardiner, Jaunt VR, VP of Engineering ©Dean Meyers/Vizworld

"'Oh, I heard about your company.' Presenting on stage in the startup competition gives you prior visibility among the investors you want to get connected to. It also brings some inbound meeting requests from venture capitalists." Anton Yakubenko, GeoCV, Co-Founder & CEO

"The LDV Summit is truly a unique synergy of smart business, cutting edge research, and inspiring creativity. It provided a platform and focused audience that can't be found anywhere else. Through the pitch competition we were instantly connected with several very interested investors." James George, Simile, Co-Founder

“It was definitely one of the best learning and networking experiences. I loved listening to all the rapidly disseminated information, and I loved the feedback from people on my thoughts. So many people came and gave me positive feedback on my ideas that context and intent are essential for finding meaning in photos, and that language divides but the visual unites.” Ramesh Jain, Professor at UCI and Co-Founder at Krumbs

Future of how humans will interact with their phones in a world of 360 photos during Evan Nisselson's Day 2 keynote. ©Robert Wright/LDV Vision Summit


“The LDV Vision Summit bests itself each year, and 2016 was no exception: the panels, talks and competitions are absolutely world class. I used to say that entrepreneurial CTOs, regardless of company size or focus, *should* attend. I’m now telling them it is absolutely essential!” Andy Parsons, Kontor, CTO & Co-Founder
 

“Photographs and images enable us to imagine ourselves in another place, another time and as if we were another person. They enable us to empathize with others across time and space. What images do for people symbolically, the LDV Vision Summit did practically for its attendees who came across space and time (zones) to attend. We were brought up close to new problems, solutions, burgeoning fields and concepts. We were able to empathize with founders up close, learn about new emerging technologies and solutions to today's and tomorrow's biggest problems.” Dan Schiffman, Tvision Insights, Co-Founder & CRO

“The LDV Vision Summit is a must for anyone interested in images, video or computer vision. This event is unique in bringing together scientists, investors, entrepreneurs and artists all willing to share knowledge and ideas in a stimulating and dynamic atmosphere. The discussions are excellent. The connections made are invaluable.” Thomas Jelonek, envision.ai, Founder

“The LDV Vision Summit is a fast-paced mix of technology, business, and academic perspectives.  You can hear insights from seasoned venture capitalists, major industry players, and young entrepreneurs developing their first vision or machine learning-based startups, packed into just two days.  It's invigorating!” Dr. David S. Touretzky,  Carnegie Mellon University, Research Professor Computer Science Department & Center for the Neural Basis of Cognition

“The LDV Vision summit is an opportunity to listen, learn and share ideas that are impacting everyone connected to the world of visuals. As photographers we too often wait to see what ideas and tools are coming down the pipeline that will have impact on the way we work. I prefer learning about potential visual technologies in advance and speaking with technologists during the process of creating them to hopefully have a beneficial impact for all. I hope other photographers will join our discussion at the next Summit.” Ron Haviv, VII Photo, Photographer & Co-Founder

Day 1 Panel: Autonomous Driving Would Not Be Possible Without Leveraging Visual Technologies.  Moderator: Mike Murphy, Quartz, Reporter. Panelists: Sanjiv Nanda, Qualcomm Research, VP, Engineering and Laszlo Kishonti, AdasWorks, CEO. ©Dean Myers/Vizworld

Day 1 Panel: Autonomous Driving Would Not Be Possible Without Leveraging Visual Technologies. 
Moderator: Mike Murphy, Quartz, Reporter. Panelists: Sanjiv Nanda, Qualcomm Research, VP, Engineering and Laszlo Kishonti, AdasWorks, CEO. ©Dean Myers/Vizworld

“In Silicon Valley, augmented reality, virtual reality, and robotics are among the hottest trends. The annual LDV Vision Summit in NYC always gathers the top-tier experts, brains, and ideas that drive these trends. It showed how early we are and, simultaneously, that we are embarking on a technology paradigm shift. I am excited for Keepy to leverage these technologies to save memories on any device and to collaborate with the people I met at this unique gathering of inspiring experts. I truly enjoy this conference each year.” Offir Gutelzon, Keepy, CEO and Co-Founder

“The summit gave us insight on emerging technologies that will be valuable for our network of photographers to anticipate as tools for their work in the near and distant future!” Susan Meiselas and Emma Raynes, Magnum Foundation

“Being surrounded by computer vision experts from academia, industry and startups was a great networking and learning experience. It was a very surprising moment for me to listen to and meet such great people in one place. I made plenty of connections and learned different perspectives about the future of computer vision & AI. I am sure the LDV Vision Summit will attract a larger crowd in the future and make a huge impact on humanity by bringing two different worlds (academia & industry) onto the same platform. I will try to be part of future LDV summits and contribute to them in some way.” Harsha Vardhan, Carnegie Mellon University, Graduate Student

“As an early stage investor in robotics, computer vision, autonomous mobility, and remote sensing, the LDV Summit is an invaluable resource to have in our venture community. Reflecting on last week’s presentations, I am still impressed by the curated experience of industry leaders, innovative startups and quality connections all under one roof.” Oliver Mitchell, Mach 5 Ventures

“I wanted to let you know your conference is hands down the best conference in NYC. I was absolutely blown away by the caliber of people you have assembled at every vertical of the ecosystem.” Jonathan Ohliger, CEO at VeePiO

“Concealed behind all the incredible people you meet, everything you learn and the amazing discoveries is the formidable inspiration that empowers you for many days after the event is over. The LDV Vision Summit only gets better with each edition.” Paul Melcher, Kaptur Magazine, Founder & Editor

"Being involved in the LDV Vision Summit was a pleasure and an amazing opportunity to network across all disciplines of entrepreneurial computer vision. Big congrats to this year's winners and everyone else for a great meeting!" Oscar Beijbom, UC Berkeley, Postdoctoral Scholar

"Easily the best curated speakers for any conference I have been to in a while and great coverage of computer vision/machine learning. The startup pitches were high level. The doodle notes on screen were unique and enjoyable to watch." Nisa Amoils, Investor and Co-Chair Frontier Tech, New York Angels

“LDV Vision Summit assembled a fantastically eclectic group of people, from investors to inventors, who are all passionate about how computer vision will change the world. I look forward to going again in 2017!” Paul Kruszewski, wrnch, CEO & Founder

"As a first-timer at the LDV Vision Summit, one never knows what to expect... will it be too academic, too commercial, too many early-stage start-ups, or over-run with larger tech companies? Instead, the LDV team has curated the perfect blend of people and topics - all tackling disparate, but related applications leveraging computer vision. In one quick trip, I engaged with potential hires, strategic partners, future investors - and learned about myriad new applications being tackled by this robust ecosystem! Definitely planning on coming back next year." Richard Lee, Netra, CEO & Co-Founder

Day 2 Keynote: How funny videos can help build cultural understanding. Trina DasGupta, Single Palm Tree Productions, CEO ©Robert Wright/LDV Vision Summit

“The Summit's clear technology focus sets it apart - I was truly impressed by the quality of companies using computer vision across a range of verticals, from VR to robotics, and by the caliber of conversation amongst speakers.” Christina Bechhold, Samsung Global Innovation Center, Investor. Empire Angels, Co-Founder, Managing Director.  

"LDV Vision Summit is a unique event focused on visual technologies with a rare mix of great entrepreneurs, researchers, investors and global technology companies. Definitely worth the time and the trip to NYC.” Jan Erik Solem, Mapillary, CEO & Co-Founder

“LDV brought together entrepreneurs, practitioners and investors for a multidisciplinary discussion about where the future of the visual image is heading. It was a great mix of people who are working in what lies ahead for visual media. It is clear we are only at the beginning of what is possible. It was exciting to be part of it.” Doreen Lorenz, Vidlet, Co-Founder

“I wanted to let you know your conference is hands down the best conference in NYC. I was absolutely blown away by the caliber of people you have assembled at every vertical of the ecosystem.” Jonathan Ohliger, VeePiO, CEO

“The LDV Summit is always inspiring – for those who are creating, those who are operating, those who are investing – and for all of us who are constantly learning. For those of us obsessed with video, the ongoing discussions about live streaming were particularly great. Thanks to all who participated and asked great questions!” Rebecca Paoletti, CakeWorks, CEO & Co-Founder

Day 1 Keynote: A Visual Stepping Stone to Artificial Intelligence. What do the recent advances in computer vision mean for AI? Computer vision and AI are intertwined, yet insights gained in one may not be applicable to the other. The future of AI research depends on identifying these differences and finding new and creative solutions. Larry Zitnick, Facebook, Artificial Intelligence Research Lead. ©Robert Wright/LDV Vision Summit


“Real nice mix of very technical solutions with real-world application. It’s nice to see the results of research applied on a real market and how those solutions have to adapt to this environment.” Brunno Attore, CTO, Brohan

“The LDV Vision Summit is at the forefront of what's changing in the image space, bringing founders and CEOs of image-related companies, academic researchers, technology developers, brands and cultural creators, and even photographers and visual artists together in one space. It's a great place to learn about what's happening in visual technology and meet the people that are defining the space.” Taylor Davidson, Unstructured Ventures, Managing Director

"IDA Ireland were thrilled to be part of such a fantastic event. It was a real privilege to spend two days with some of the smartest people in Computer Vision, Artificial Intelligence, Deep Learning, and Augmented Reality. The summit provided invaluable insights into what is happening at the forefront of vision technology." Jessica Benson, IDA Ireland, @IDAIRELAND

“The LDV Vision Summit program and quality of attendees never fail to exceed expectations. Evan, Rebecca and Serge do a masterful job of curating a kaleidoscope of speakers, topics and sponsors and it works. Please set the date for 2017 so I can save it!” Myron Kassaraba, Managing Director, MJK Partners, LLC

“We are three CS students, just graduating from Cornell Tech, spinning out our CV-driven video ad tech company. We are actively raising seed, actively trying to build partnerships with video platforms, and actively recruiting CV talent. So, for us, LDV was a hydra-headed home run. We met dozens of potential investors (and got a sense of who else to target). We connected with a few video player platforms and already set up a meeting with one. Lastly, we left with a fistful of solid recruiting leads. Next year, I'll have major FOMO if I can’t make LDV. It felt, for those two days, that we were at the center of the vision world. Even if we're not raising money, I feel like we'll have to be here to scope out the competition and see what’s on the horizon for CV.” Bill Marino, CEO, Brohan
 

Day 2 Panel: Is OTT the New Black? Monetizing digital video has confounded creators and programmers, with syndication leading the list of potential levers to pull. With the promise of OTT, suddenly new revenue streams can be unlocked, new audiences tapped, and money can flow. Right? But how easy is it, after all… [L-R] Moderator: Rebecca Paoletti, CakeWorks, CEO. Panelists: Ed Laczynski, Zype, CEO; Patricia Hadden, NBCUniversal, SVP, Digital Enterprises; Steve Davis, Ooyala, VP & GM East. ©Robert Wright/LDV Vision Summit


Learn more about our partners and sponsors:

AWS Activate: Amazon Web Services provides startups with the low-cost, easy-to-use infrastructure needed to scale and grow any size business. Some of the world’s hottest startups, including Pinterest, Instagram, and Dropbox, have leveraged the power of AWS to easily get started and quickly scale.

CakeWorks is a boutique digital video agency that launches and accelerates high-growth media businesses. Stay in the know with our weekly video insider newsletter. #videoiscake

Cornell Tech is a revolutionary model for graduate education that fuses technology with business and creative thinking. Cornell Tech brings together like-minded faculty, business leaders, tech entrepreneurs and students in a catalytic environment to produce visionary ideas grounded in significant needs that will reinvent the way we live.

Day 1 Fireside Chat: Bijan Sabet, Spark Capital & Evan Nisselson, LDV Capital. ©Robert Wright/LDV Vision Summit


Research at Facebook: Our mission is to give people the power to share and make the world more open and connected. At Facebook, research permeates everything we do. We believe the most interesting research questions are derived from real world problems. Working on cutting edge research with a practical focus, we push product boundaries every day. At the same time, we are publishing papers, giving talks, attending and hosting conferences, and collaborating with the academic community.

FLIR designs, develops, manufactures, markets, and distributes technologies that enhance perception and awareness. We bring innovative sensing solutions into daily life through our thermal imaging systems, visible-light imaging systems, locator systems, measurement and diagnostic systems, and advanced threat detection systems. Our products improve the way people interact with the world around them, enhance public safety and well-being, increase energy efficiency, and protect the environment.

GumGum is a leading computer vision company with a mission to unlock the value of every online image for marketers. Its patented image-recognition technology delivers highly visible advertising campaigns to more than 400 million users as they view pictures and content across more than 2,000 premium publishers.

The International Center of Photography is the world’s leading institution dedicated to the practice and understanding of photography and the reproduced image in all its forms. Since its founding in 1974, ICP has presented more than 700 exhibitions and offered thousands of classes, providing instruction at every level.

Day 1 Keynote: Design Patterns for Evolving Storytelling Through Virtual and Mixed Reality Technologies.  Heather Raikes has a PhD in Digital Arts and Experimental Media and is currently Creative Director at Seattle-based virtual and mixed reality development studio 8ninths. She will discuss archetypes that underscore the fundamentals of storytelling and emerging design patterns that can be applied to virtual and mixed reality technologies. Heather Raikes, 8ninths, Creative Director, Augmented | Virtual Reality ©Robert Wright/LDV Vision Summit


IDA Ireland is Ireland's inward investment promotion agency. We partner with international companies, working with them every step of the way to achieve a smooth, fast and successful set-up of their operations in Ireland. Ireland is one of the best places in the world to do business for large multinationals and high-growth companies.

Kaptur is the first magazine about the photo tech space. News, research and stats along with commentaries, industry reports and deep analysis written by industry experts.

LDV Capital invests in people around the world who are creating visual technology businesses with deep domain expertise.

Mapillary is a community-based photomapping service that covers more than just streets, providing real-time data for cities and governments at scale. With hundreds of thousands of new photos every day, Mapillary can connect images to create an immersive ground-level view of the world for users to virtually explore and to document change over time.

Microsoft Research has contributed to nearly every product Microsoft has shipped, including Kinect for Xbox, Cortana, cool free photography apps like Hyperlapse, and other programs that help secure your data in the cloud. We have world-renowned scientists at the forefront of machine learning, computer vision, speech, and artificial intelligence. Our external collaborations include efforts to prevent disease outbreaks and solve problems facing large cities such as traffic and pollution.

The MFA Photography, Video and Related Media Department at the School of Visual Arts is the premier program for the study of Lens and Screen Arts. This program champions multimedia integration and interdisciplinary activity, and provides ever-expanding opportunities for lens-based students.

Qualcomm Research is a world-class, global R&D organization composed of forward-thinking researchers who engage in a wide variety of exciting and technically challenging areas of research. Each focus area pushes the envelope of what is possible in mobile technology, paving the way for the devices, applications, services, and business models of tomorrow. We are leading the way in next-generation wireless technologies, including 5G. For more than 25 years, our ideas and inventions have driven the evolution of digital communications, linking people everywhere more closely to information, entertainment, and each other.

Day 1 Panel: Where will Computer Vision Be in 5, 10 & 20 years? Businesses leveraging computer vision have grown exponentially in recent years. Dozens of computer vision companies have recently been acquired by Google, Microsoft, Yahoo, Apple, Facebook, Salesforce, GoPro, Twitter and other major players. How will image recognition disrupt businesses and empower humanity? How can we inspire more researchers to bring their visions to market? Could computer vision become a commodity? If yes, when? Moderator: Taylor Davidson, Unstructured Ventures, Managing Director. Panelists: Ramesh Jain (UCI, Prof., Computer Vision), Serge Belongie (Cornell Tech, Prof., Computer Vision), Stacey Svetlichnaya (Flickr, Software Developer, Vision & Machine Learning), Nikhil Rasiwasia (Snapdeal, Principal Scientist)


Vidlet taps the power of mobile video for business. The company’s mobile-first B2B video platform makes it easy for large enterprises to use mobile video for a wide range of business communications, from conducting market research for innovative new products, to training a world-class workforce and employee engagement, to surfacing valuable insights from troves of video.

VizWorld covers news and the community about visual thinking, from innovation and design theory to applied visual thinking in technology, media and education. From the whiteboard to the latest OLED screens, graphic recording to movie making and VFX, VizWorld readers want to know how to put visual thinking to work and play. SHOW US your story!

AliKat Productions is a New York-based event management and marketing company: a one-stop shop for all event, marketing and promotional needs. We plan and execute high-profile, stylized, local, national and international events, specializing in unique, targeted solutions that are highly successful and sustainable. #AliKatProd

Robert Wright Photography Clients include Bloomberg Markets, Budget Travel, Elle, Details, Entrepreneur, ESPN The Magazine, Fast Company, Fortune, Glamour, Inc. Men's Journal, Newsweek (the old one), Outside, People, New York Magazine, New York Times, Self, Stern, T&L, Time, W, Wall Street Journal and more…

Hybrid Events Group is the only real-time solution for turning conferences and meetings into web-ready videos. Using our proprietary CapturePro™ System we digitally capture your presentation materials, combine them with HD video of your presenters and then edit everything live, right there, during your event. You’ll have your finished videos before you leave the venue.

We are a family affair! Serge and August Belongie thanking the audience for a fantastic and inspirational gathering. See you next year! #carpediem ©Robert Wright/LDV Vision Summit


We're Taking Billions Of Photos A Day—Let's Use Them To Improve Our World!

Join us at the next LDV Vision Summit.
This keynote is from our 2015 LDV Vision Summit, excerpted from our LDV Vision Book 2015.

Pete Warden, Engineer, Google

©Robert Wright/LDV Vision Summit


My day job is working as a research engineer for the Google Brain team on some of this deep learning vision stuff. But, what I'm going to talk about today is actually trying to find interesting, offbeat, weird, non-commercial applications of this vision technology, and why I think it's really important as a community that we branch out to some weird and wonderful products and nonprofit-type stuff.

Why do I think we need to do this? I think computer vision, the way it's set up at the moment, has some really deep, fundamental problems.

The number one problem is that it doesn't actually work. I don't want to pick on Microsoft, because their How-Old demo was amazing. As a researcher and as somebody who's worked in vision for years, it's amazing we can do the things we do. But I could have picked any recognition or any vision technology: if you look at the general public's reaction to what we're doing, they're confused and bewildered by the mistakes it makes, left scratching their heads. That shows what a massive gap in expectations there is between what we're doing as researchers and engineers and what the general public actually expects.

What we know is that computer vision, the way we measure it, is actually starting to kind of, sort of, mostly work now -- at least for a lot of the problems that we actually care about.

This is one of my favorite examples from the last few months, where Andrej Karpathy from Stanford actually tried to do the ImageNet object recognition challenge as a human, just to see how well humans could do at the task we'd set the algorithms. He spent weeks training for this, manually looking through and trying to learn all the categories, and spent a long time on each image. Even at the end of that, he was only able to beat the best of the 2014 algorithms by a percentage point or two. His belief was that that lead was going to vanish shortly as the algorithms kept improving.

It's pretty clear that, by our own measurements, we're doing really well. But nobody's impressed that a computer can tell them that a picture of a hot dog is a picture of a hot dog. That doesn't really get people excited. We have a real perception problem when we go out and talk to partners and the general public. The applications that do work tend to be around security and government, and they aren't particularly popular either. The reason this matters is that not only do we have a perception problem, but we also aren't getting the feedback we need from working on real problems while doing this research.

What's the solution? This is a bit asinine. Of course we want to find practical applications that help people. What I'm going to be talking about for the rest of this is just trying to go through some of my experiences, trying to do something a little bit offbeat, a little bit different, and a little bit unusual with nonprofit-type stuff -- just so we've actually got some practical, concrete, useful examples of what I'm talking about.

©Robert Wright/LDV Vision Summit


The first one I'm going to talk about is one that I did that didn't work at all. I'm going to use this as a cautionary tale of how not to approach a new problem that's trying to do something to help the world. I came into this with the idea that...I was working at my startup Jetpac. We had hundreds of millions of geotagged Instagram photos that were public that we were analyzing to build guides for hotels, restaurants, bars all over the world. We were able to do things like look at how many photos showed mustaches at a particular bar to give you an idea of how hipster that particular bar was. It actually worked quite well. It was a lot of fun, but I knew that there was really, really interesting and useful information to solve a bunch of other problems that actually mattered.

One of the things that I thought I knew was that pollution gives you really, really vivid sunsets. This was just something that I had embedded in my mind, and it seemed like something I should be able to pull out from the millions of sunset photos we had all over the world. I spent a bunch of time analyzing these, looking at public pollution data from cities all over the US, with the hope that I could build this sensor, just using free, open, public data to estimate and track pollution all over the world almost instantly. Unfortunately, it didn't work at all. Not only did it not work, I actually saw worse sunsets where there was more pollution.
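The kind of check described here can be sketched in a few lines. This is a hypothetical reconstruction, not Jetpac's actual pipeline: it correlates an invented per-city sunset-vividness score with invented pollution readings using a plain Pearson correlation.

```python
# Hypothetical sketch of the analysis described above: correlate a
# per-city "sunset vividness" score (e.g., mean saturation of geotagged
# sunset photos) with official pollution readings. All numbers invented.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (no external libraries)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example values for five cities: PM2.5 level vs. a 0-1
# vividness score extracted from sunset photos.
pm25      = [12.0, 35.0, 55.0, 20.0, 80.0]
vividness = [0.71, 0.52, 0.40, 0.66, 0.31]

r = pearson(pm25, vividness)
print(round(r, 3))  # a strongly negative r would mirror the talk's finding
```

A result like this (more pollution, less vivid sunsets) is exactly the surprise the talk describes.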


At that point, I did what I should have done at the start, and went back and looked at what the atmospheric scientists were saying about pollution and sunsets. It turns out that only very uniform particulate pollution at really high altitudes -- which is what you typically get from volcanoes -- actually gives you vivid sunsets. Other kinds of pollution, as you might imagine if you've ever lived in LA, just give you washed-out, blurry, grungy sunsets. The lesson for me was that I should have been listening to and driven by the people who actually understood the problem, rather than jumping in with my shiny understanding of technology but no real understanding of the domain.


Next I want to talk about something that really did work, but I didn't do it. This is actually one of my favorite projects of the last couple of years. The team at onformative took a whole bunch of satellite photos and ran face detectors across them. Hopefully you can see, there appears to be some kind of Jesus in a cornfield on the left hand side, and a very grumpy river delta on the right. I thought this was brilliant. This is really imaginative. This is really different. It joins a data set together with a completely different set of vision technologies and shows how far we've come with face detection.


But shortly after I saw this example, I ran across this news story about a landslide in Afghanistan that had killed over a thousand people. What was really heartbreaking was that geologists looking at just the super low-res, not-very-recent satellite photos and the elevation data on Google Earth said it was painfully, painfully clear that this landslide was going to happen.

What I'm going to just finish up with here is that there's a whole bunch of other stuff that we really could be solving with this:

- 70 other daily living activities
- pollution
- springtime
- rare wildlife.

What I'm trying to do is actually just start a discussion mailing list here at Vision for Good, where we can bring together some people who are working on this vision stuff and the nonprofits who actually want to get some help. I'm really hoping you can join me there. No obligation, but I want to see what happens in this.

 

Join us at the next LDV Vision Summit.
This keynote is from our 2015 LDV Vision Summit, excerpted from our LDV Vision Book 2015.

How Can Data Science Evaluate Why Some Advertising Creative Content Resonates More Than Others?

Join us at the next LDV Vision Summit
This keynote is from our 2015 LDV Vision Summit, excerpted from our LDV Vision Book 2015.

Claudia Perlich, Chief Scientist, Dstillery

©Robert Wright/LDV Vision Summit


When Evan invited me to come and talk at a vision event, my initial response was: “I am not sure. I actually don't do vision. I typically don't even try to visualize my data.” But I thought about it, and there are some very interesting developments going on in digital advertising in our company right now. I wanted to share the premise, and maybe some of the promise, of this work. It all starts with what moves us. Ultimately, you would argue that the whole point of advertising is to affect people, to touch them emotionally in some way, maybe bring them closer or at least generate some interest in your product.

What I'm going to do is I'm going to show you a couple of images. And I'm asking you: What moves you?

This is a pretty well known campaign. I'm sure you have seen many of those before, maybe not all of them. I will not embarrass you by asking you to raise your hand on which of those you felt most touched or affected by. But chances are that we all had very different reactions to these images. I am not going to tell you which one is my favorite, but I wanted to take the opportunity to tell you a little bit of a story that few people know about me.

I grew up in East Germany and that in particular means -- and that's the irony of my life -- that until age 15 I had never seen an ad, because in East Germany there was no such thing as advertising. There was nothing really to sell anyways, so why the hell would you want to advertise it? When the wall came down I became fascinated by ads. Not because of the products, but because what I discovered was photography: beautiful images of things and nature. And where I discovered it was in cheap magazines that had incredibly well-printed and produced pictures (by my East German standards at least). The dirty little secret is I spent my time as a teenager collecting those. It was truly because I was just incredibly amazed and touched by some of that photography. Not that I ever bought any of these things, but still.

©Robert Wright/LDV Vision Summit


Images have the ability to connect to us, to touch us. I want to challenge you right now: do you think you're able to express why a given image had such an impression on you? I'm sure you can tell me which one it was and I will recognize it, but do you even know why? I personally feel that we are very restricted by language when we try to explain things, when we annotate images, when we bucket them: "This is a happy cat and this is a grumpy cat." Ultimately, language is limiting in the ability to express our emotions.

Whether or not the tale is true that northern tribes have an exceedingly large number of words for the various types of snow, consider that the average person’s vocabulary is estimated to be only around 17K words. While this may sound like a lot in a specific context, that is all there is for all the things we may ever wish to say. Anyone trying to describe the subtle details of an image will soon realize how limited our ability to characterize colors already is. How many words for different colors can you come up with? I have seen a list of about 150, and it contained a lot of analogies like “forest green.” Do you want to guess how many colors your Internet browser uses? HTML allows for 16 million. That's one of the challenges I feel we face in machine learning when we try to explain, characterize, categorize, and annotate things. We're stuck with language, and in that process a lot of the magic and subtlety gets lost.
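That browser figure is just byte arithmetic: HTML hex colors allocate one byte per red, green, and blue channel. A quick sketch, using the comparison numbers from the talk:

```python
# HTML/CSS hex colors use one byte (0-255) for each of red, green, blue,
# so the total palette is 256 ** 3 distinct values.
channels = 256                 # values per channel: 00-FF in hex
total_colors = channels ** 3
print(total_colors)            # 16777216, i.e. the "16 million"

# Compare with the ~150-name color list and ~17K vocabulary mentioned
# in the talk.
named_colors = 150
vocabulary = 17_000
print(total_colors // named_colors)  # browser colors per named color
```

So every named color stands in for over a hundred thousand distinct screen colors, which is the "stuck with language" point in miniature.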

As I said, I work in advertising, so I want to show you some of the alternatives where we're trying to avoid having to characterize what it is about an image that touches you. I don't actually have results for Equinox, so what I'm going to show you are results from a food brand. We ran an experiment where we showed digital ads with a set of creatives that vary in both image and message. The interesting question is not so much “which of these variations is ‘best’” but rather to understand how different people react differently to these variations. The methodology combines a randomized experiment with machine learning. I will start by looking at the impact of just the different images.

What is this graph here? It's showing the impact of the image as a factor for different groups of people. Unfortunately I cannot show you the exact six images, and now I have to eat my words and describe them a little bit in terms of what they looked like: family oriented (probably a picture with the family), individual, lifestyle, just showing the logo, some variety of the product, or just a very close-up shot of the product itself. And in order to give you a talk, I also have to describe the characteristics of different groups of people to you rather than the actual millions of details that our machine-learning approach is actually processing. Specifically, I have grouped people by where they physically go -- fast food restaurants in this first comparison and gyms in the next. (We obtain the information about device locations from mobile advertising bid requests.)

The first thing you observe is that people who go to Chick-fil-A are really, really hard to sway to buy this product. No matter what you show them, they're kind of happy with where they are, thank you very much. The next observation is that the “lifestyle” image sometimes has no effect at all, whereas the product image was overall the most effective across groups.

But you also see a lot of variance between these subgroups -- they react very, very differently to some of the differences in the images. Right now, this is just a very high-level picture characterizing people by the fast foods they like. But you see clear differentiation, and you can think about how you would use that information to schedule or to choose separate images for sub-populations if you wanted to specifically reach out to any of these groups.


Let’s take a look at how the messages themselves fare. Some related more to the natural process of the production. Others talked more about taking a snack at a certain time of the day. Some tried to tell you that they're really healthy and good for you, talking about the benefit of this product. What you see here is now broken up by people who go to certain gyms.

In general you see that there's an overarching effect of emphasizing the benefit: it's good for me. I mean, yes, people who go to the gym probably care about the benefit, and this is very consistent. But what is fascinating to see are the implicit groupings of the gyms: Equinox and Crunch are similar, and so are YMCA and LA Fitness. The populations are similar to each other in how they are affected by the message of the creative. In the case of YMCA and LA Fitness, the descriptive message "This is a great product, it will taste perfect" is very effective. This is the sensory stimulation that is not just limited to taste but also includes texture and how it will make you feel beyond taste.

I wanted to use this analysis as an example to really challenge our industry as we move on. What's important here is, that I wasn't trying to characterize the images in any way, but just let a machine-learning algorithm estimate how different people are affected by the message and the image. If you think this forward, maybe in the future I actually don't need to ask you whether it was the guy carrying the little statue. I might be able to predict from observed data alone which of those creatives will most likely speak to you as an individual.

Join us at the next LDV Vision Summit.
This panel discussion is an excerpt from our LDV Vision Book 2015

The Power And Promise Of Emotion Aware Machines

Join us at the next LDV Vision Summit.
This keynote is an excerpt from our LDV Vision Book 2015

Marian Stewart Bartlett, Co-Founder & Lead Scientist, Emotient.
Apple acquired Emotient in January 2016 after she spoke at our 2015 LDV Vision Summit.

Technology that can measure emotion from the face will have broad impact across a range of industries. In my presentation today, I am going to provide a picture of what's possible with this technology today and also provide an indication of what's possible for the future, where the field may be going. But first I will show a brief demo of facial expression recognition.

You can see the system detecting my face, and then when I smile, the face box changes to blue to indicate joy. On the right, we see the outputs over time. Okay, so that's over the past 30 seconds or so. Next, I will show sadness.


That was a pronounced expression of sadness. Here is a subtle sad. Natural facial behavior is sometimes subtle but other times, it's not necessarily subtle. Sometimes, it's fast. These are called micro expressions. These are expressions that flash on and off your face very, very quickly. Sometimes in just one frame of video.

I will show some fast expressions, some fast joy. Then, also surprise. Fast surprise. Anger. Fear. Disgust. Now, disgust is an important emotion because when somebody dislikes something, they'll often contract this muscle here, the levator muscle, without realizing they are doing it. Like this. And then there is contempt. Contempt means unimpressed.

Some things that are possible today are to do pattern recognition on the time courses of this signal. If we take a strong pattern recognition algorithm, capturing some of the temporal dynamics, we are able to detect some things sometimes better than human judges can. For example, we've demonstrated the ability to detect faked pain and distinguish it from real pain and we can do that better than human judges. Other things that we can detect are depression, student engagement, and my colleagues have also demonstrated that we're able to predict economic decisions. I will tell you a little bit more about that decision study.

Facial expressions provide a window into our decisions. The reason is that the emotional parts of our brain, particularly the amygdala, which is part of the limbic system, play a huge role in decision making. The amygdala is responsible for that fast, gut response, that fast value assessment that drives a lot of the decisions we end up making. One of the other co-founders of Emotient, Ian Fasel, collaborated with one of the leaders in neuroeconomics, Alan Sanfey, to ask whether they could predict decisions in economic games from facial expression.

©Robert Wright/LDV Vision Summit

Here is the game. It's the ultimatum game. In this game, Player One is given some money and then Player One has to offer some of the money to Player Two. They can offer none, all, or anything in between. If Player Two accepts, then both get the money. But if Player Two rejects, then neither one gets any money.

The optimal solution, according to most economic theories, is to always accept, because you will get more money if you always say yes. However, humans don't always behave optimally in this sense. They get mad: "This guy is a jerk. I am going to punish him and reject his offer, and nobody is getting any money." Rossi, Fasel, and Sanfey asked whether they could predict decisions in this game. They used our system to measure individual facial muscle movements and then gathered dynamic features of those movements. They passed these features to a GentleBoost classifier trained to detect whether the player would accept or reject the offer.

They also compared the machine learning system to human judges looking at the same videos. The human judges were at chance; they could not detect who was going to reject the offer. The machine learning system, however, performed above chance: it was 73% accurate at predicting decisions in this game. They could also identify which signals contributed to detecting rejection. Facial signals of disgust were associated with bad offers, but they didn't necessarily predict rejection; what predicted rejection was facial expressions of anger. They could then ask which temporal frequencies contained the most information for this discrimination. The discriminative signals turned out to be fast facial expressions: facial movements on the order of about a half-second cycle. Humans, on the other hand, were basing their decisions on much longer time scales. To show that, the researchers trained a second GentleBoost classifier, this time to predict the observers' guesses, and then looked at which features were being selected. The observer guesses were driven by facial signals on time scales that were too long.
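
The setup of the study can be sketched in a few lines. This is a minimal illustration in the spirit of the Rossi, Fasel, and Sanfey approach, not their actual pipeline: GentleBoost is not available in scikit-learn, so `GradientBoostingClassifier` stands in, and all feature names and data here are synthetic.

```python
# Illustrative sketch: predict accept/reject decisions from temporal features
# of facial muscle movements, using a boosting classifier as in the study.
# Synthetic data: feature 0 plays the role of fast "anger-like" dynamics.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_players, n_features = 200, 12  # e.g. per-muscle temporal stats (rate, amplitude)
X = rng.normal(size=(n_players, n_features))
# Ground truth (invented): fast anger-like movement drives rejection.
y = (X[:, 0] + 0.5 * rng.normal(size=n_players) > 0).astype(int)

clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # well above the 0.5 chance level
```

The same recipe, trained instead on the human observers' guesses as labels, is what let the researchers see which time scales the humans were (wrongly) relying on.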

©Robert Wright/LDV Vision Summit

There are many commercial applications of facial expression technology. Some of you may remember the Sony Smile Shutter. The Smile Shutter detects smiles in the camera image, and that was based on our technology back at UCSD, prior to forming the company. That was probably one of the first commercial applications of facial expression technology. What I've shown here on the screen is one of the more prominent applications at this time. This is an ad-test focus group, and here the system is detecting multiple faces at once and summarizing the results into some key performance indicators: attention, engagement, and sentiment.

Now, where this is moving in the future is toward facial expression in the wild: recognition of sentiment out in natural contexts where people are naturally interacting with their content. Deep learning has contributed significantly here because it has helped provide robustness to factors such as head pose and lighting, enabling us to operate in the wild. This shows some of the improvement we got when we moved to a deep learning architecture. Blue shows our robustness to pose prior to deep learning, and green shows the boost we got when we changed over to deep learning with an equivalent data set.

Here is an example of media testing in a natural context. What we have is people watching the Super Bowl halftime in a bar. Watch the man in green. He shows some nice facial expressions in just a moment.

Next, we have the system aimed at a hundred people at once during a basketball game. Here we are gathering crowd analytics and getting aggregate information almost instantly and it's also anonymous because the original video can be discarded and we only need to keep the facial expression data. Here we have a crowd responding to a sponsored moment at a particular basketball game.

There are also a number of applications of this technology in medicine. The system is able to detect depression and it can be employed as a screening mechanism during tele-medicine interviews, for example. It can track your response over time, your improvement over time, and also quantify your response to treatment.

©Robert Wright/LDV Vision Summit

Another area where it can contribute in medicine is pain. We can measure pain from the face. It's well known that pain is under-treated in hospitals today and we have an ongoing collaboration with Rady Children's Hospital where we have demonstrated that we can measure pain in the face postoperatively right in the hospital room. Now this contributes both to patient comfort but also to costs because under-treated pain leads to longer hospital stays and greater re-admission rates.

Education is another area where this technology will have broad impact. This image shows three facial behaviors related to learning.

The girl in the middle is distressed. The one on the left is engaged in her task and the one on the right is moving away. These are behaviors that can be detected right now with this technology. We can also take this a step further and we can make online education and adaptive tutoring systems adapt to the emotional state of the student the way good teachers do.

In summary, facial expression technology is enabling us to measure sentiment in locations and scales that were previously not possible. It has the potential to predict consumer decisions and behavior and will have broad impact across a large range of fields. I showed you some in advertising, ad copy testing, medicine, and education. It will be a game changer.

©VizWorld

Join us at the next LDV Vision Summit.
This panel discussion is an excerpt from our LDV Vision Book 2015

 

Faces Are The Key To Success For Social Platforms

Josh Elman is a partner at Greylock Partners. He has extensive operational experience working at social networks & platforms such as LinkedIn, Twitter & Facebook.

We are honored that he will be one of ~80 expert speakers at our next LDV Vision Summit May 24 & 25 in NYC. We are starting our fireside chat with Josh virtually and hope you join us at our summit next month to hear the extended live version.

Evan: What is your favorite camera today and why? What do you think will be your favorite camera in 20 years?

Josh: My favorite camera today is my iPhone. It’s my favorite because it’s the one that’s with me all the time. It’s in my hands a lot (too often?!) so whenever I come across something new or a moment I want to remember, I just open the camera and take it. Sometimes I share pictures on Facebook, Twitter, Snapchat, Instagram, etc but most of the time I just take them for me. My camera roll is full of random moments of my days – mostly amazing memories.

In 20 years I think my favorite camera will still be the one that’s with me all the time. I’m guessing it will be built into my glasses. Wouldn’t it be cool if all I had to do was think about saving a moment and it's automatically saved? Or have all moments be recalled with just a thought? Or will there be flying cameras around us at all times capturing ourselves in the activity instead of only from our vantage point? That would be cool too.

Evan: You have extensive work experience at social networks & platforms such as LinkedIn, Twitter & Facebook. As a partner at Greylock, you have invested in networks and platforms such as Medium, Meerkat, and WhoSay. They all leverage visual content. What are the most valuable attributes of visual content that exponentially drive network effects?

Josh: What people love about social platforms is that most of the content they read, interact with, and share is personal. It’s intimate. It’s authentically written and shared by another person.

Faces are the key to social platforms – whether it’s looking at pictures that someone else took of you (very popular in early Facebook) or the now ubiquitous selfie. Profile pictures and avatars frame every single post on Facebook, Twitter and more – and in eye tests, we’ve seen that users linger first over the face/avatar before reading the content.

LinkedIn took many years to add photos to profiles – for exactly this reason – in a professional context, they were worried that faces would color how someone perceived the profile.

Evan: You wrote a great Medium article looking back at 2015 and forward to 2016. You believe that the days of Geocities and Myspace were more emotionally expressive and that we have lost some of that online. What might be some examples of more expressive activities that you would like added to your daily life today and tomorrow?

Josh: In the real world, we express ourselves all the time – by the clothes we wear, by what we carry, jewelry, shoes, and more. We decorate our personal spaces in the same way – colors of paint, style of furniture, art and posters on the wall, accessories everywhere.

In the days of Geocities and MySpace, everyone who participated in those platforms had a space online they could decorate however they wanted. People did crazy things with backgrounds, fonts, and sounds. In today’s online social systems, we can differentiate ourselves by the content we share, but it’s very constraining – every profile looks nearly the same, you can customize your profile picture and maybe a banner.

When I used to go to someone’s MySpace page, I could learn so much about them just by the look of the page – whether it had ponies or goth skulls all over it. I’d love to see that return to our social platforms so you can get to know people much more visually and expressively.

Evan: You meet a ton of new entrepreneurs every day, but you only invest in a small percentage of the people you meet. What are the most important personality traits of the entrepreneurs you prefer investing in?

Josh: I meet so many entrepreneurs every week and month, and given how passionate everyone is, I often wish I could work with them all. What I get most excited by is when someone paints a vision of how they believe the world will work in a few years, and how they are building the products and services that will enable that.

The best visions are incredibly intoxicating. But beyond just having a great vision, I look for someone who is very pragmatic, and who understands how to break this down into just the first step, then maybe the next step, and the step after that to show progress towards that goal.

We often use the term “Learner” to describe the founders we most enjoy funding and working with. They are people who treat everything they do, and everyone they meet as an incredible learning opportunity to get more information to build their dreams faster.

Evan: What are you most looking forward to at our LDV Vision Summit?  

Josh: I’m very excited to meet all of the people thinking about computer vision, machine learning, and how these great innovations can be applied to make products that change people’s lives. It’s rare to see so many great people come together around one important topic like this.

Evan: I look forward to speaking with Josh and all of you in more detail during our fireside chat at our LDV Vision Summit in NYC on May 24 & 25 [50% discount off tickets until April 30]. We try to make our sessions very interactive and look forward to your questions.

Other expert speakers at our Summit are from Google, Refinery29, Facebook, Cornell Tech, Qualcomm, First Round, Lytro, Greylock Partners, Olapic, Quartz, Mapillary, Microsoft Research, CakeWorks, NBCUniversal, RRE Ventures, Magic Leap, Mine, Samsung, Enlighted, Flickr, IBM Watson, and many more….

Computer Vision Will Disrupt Many Businesses Especially Manufacturing and Consumer Shopping

Howard Morgan is a founding partner at First Round Capital and director at Idealab. He began his career as a professor of decision sciences at the Wharton School of the University of Pennsylvania and professor of computer science at the Moore School at the University of Pennsylvania from 1972 through 1985.

We are honored that he will be one of ~80 expert speakers at our next LDV Vision Summit May 24 & 25 in NYC. We are starting our fireside chat with Howard virtually and hope you join us at our summit next month to hear the extended live version.

Evan: First Round Capital “FRC” has invested in many visual technology businesses that leverage computer vision. In the next 10 years - which business sector will be the most disrupted by computer vision and why?

Howard: Both the manufacturing inspection area, which has had various types of visual tech over the years, and the consumer sectors, particularly shopping, will be disrupted. You will be able to shop either by asking Alexa for something, or showing it to her - or her equivalents with a camera.

Evan: There are many AdTech companies in FRC's portfolio. Several sessions at our LDV Vision Summit will cover how computer vision is empowering the advertising industry, from user-generated content to tracking audience TV attention and increasing ROI for marketing campaigns. What intrigues you about how computer vision can impact this sector?
 

©Peter G.Weiner Photography

Howard: Our GumGum investment, along with our investment in Curalate, both make heavy use of computer vision to determine and target users in the visual sectors. People will be shown contextually relevant ads or additional information based on the pictures they're looking at or creating on the various social media platforms. This will get much more specific and lead to shopping directly from images - something that's been tried but has been hard to do well with mediocre image recognition.

Evan: FRC has a unique approach to celebrating the New Year with your great annual holiday video. It seems like a successful video marketing initiative. How did that idea originate and what was the goal? After years of doing this holiday video - what is your advice for your companies that wish to create an annual video marketing initiative?

First Round Annual Holiday Video 2013 “What Does The Unicorn Say?”

First Round Annual Holiday Video 2015 “This Time It’s Different”

Howard: Josh and the team decided we wanted to have fun with our companies, and show the power of the First Round network as one which not only has high-performing companies, but also great fun. And it was our way to feature the companies, and not just our partners, in our holiday greetings to the world. Our advice to those who want to do an annual video marketing piece is to be creative - we have chosen parody, but there are lots of other ways to be creative.

Evan: You frequently experience startup pitches. What is your one-sentence advice to help entrepreneurs improve their odds of success?

Howard: Get really good at crisply telling your story - why the product is needed, and how you’re going to make money with it.

Evan: What are you most looking forward to at our LDV Vision Summit?

 

©Heather Sullivan

Howard: The LDV Vision Summit has a great view of the future of vision and related technologies. I always want to be learning about the early technologies that will impact us over the next decade, and LDV is where I hope to find some of that information.

Evan: I look forward to speaking with Howard and all of you in more detail during our fireside chat at our LDV Vision Summit in NYC on May 24 & 25 [50% discount off tickets until April 30]. We try to make our sessions very interactive and look forward to your questions.


 

 

Dynamic Images And Video Exponentially Increase ROI In Email Marketing

Vivek Sharma, CEO & Founder, Movable Ink ©Robert Wright/LDV Vision Summit


Vivek Sharma, CEO & Founder, Movable Ink

Evan, you look exactly the same as when we sat next to each other at General Assembly except you had a full head of hair [laughter]. It's funny, we moved into General Assembly around the same time, about four years ago, and I remember sitting diagonally across from Evan. The common thing we noticed about each other is we were a few years older than others in that group at GA. I was working on a marketing software company and Evan is a marketing genius. If you kind of do the Mad Men analogy, he's the bald Don Draper to my Indian Roger Sterling. At Movable Ink, we're a contextual marketing company that happens to do email really well, and we're headquartered here in New York. We think a lot about the power of imagery and the power of photos to compel people, to get them to engage, to get them to interact with you. Where we think marketing is headed is that it's really about creating an experience over selling a product: creating an experience that delivers on the promise of a brand. I want to talk a little bit more about how we see imagery and photography changing and being used in more creative ways within marketing.

Let me start with a survey right here. This is a campaign that Allen Edmonds, a clothing retailer, ran recently, and there were two different types of creative that they wanted to test out. Creative A, which you see on your left side, has a nice fabric background and interesting copy. Creative B has a few more product shots and a different type of imagery.

How many of you think creative A performed better? Only five hands are showing. How many think creative B performed better? The vast majority of you. This is the kind of thing that happens in email marketing departments: people go by gut feel and decide, is this going to be effective or not? Well, most of you were wrong. As it turned out, in this particular case—and this could be completely different in the next campaign—people are voting with their engagement, with what they choose to view and read and click on. Something that was formerly impossible was the ability to test creatives on the fly. This is a use of dynamic creative where you're able to A/B and multivariate test. While an email campaign is running, your audience is telling you what's working really effectively, and the system is switching over to the winning forms of creative on the fly. These are the types of things that happen in other areas of web marketing, but email marketing has been a little stagnant and this was formerly impossible.
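
"Testing creatives on the fly" is essentially a multi-armed bandit problem. Here is a minimal epsilon-greedy sketch, not Movable Ink's actual system: the click-through rates and creative names are invented, and a production system would use something more robust (e.g. Thompson sampling) over live send data.

```python
# Epsilon-greedy bandit: shift traffic toward the creative with the better
# observed click rate while the campaign is still running.
import random

random.seed(42)

TRUE_CTR = {"creative_A": 0.03, "creative_B": 0.06}  # unknown to the marketer
EPSILON = 0.1  # fraction of sends reserved for exploration

clicks = {c: 0 for c in TRUE_CTR}
sends = {c: 0 for c in TRUE_CTR}

def choose_creative():
    # Explore until both creatives have data, and 10% of the time thereafter.
    if random.random() < EPSILON or not all(sends.values()):
        return random.choice(list(TRUE_CTR))
    # Otherwise exploit: pick the creative with the best observed click rate.
    return max(sends, key=lambda c: clicks[c] / sends[c])

for _ in range(20_000):
    c = choose_creative()
    sends[c] += 1
    clicks[c] += random.random() < TRUE_CTR[c]  # simulate a click (True == 1)

winner = max(sends, key=sends.get)
print(winner, sends)
```

After a few thousand simulated sends, the better-performing creative receives the bulk of the traffic, which is exactly the "switching over to the winning creative" behavior described above.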

The result was a huge lift in click-through, and this is one example: images change in real time. The analogy that I use: if you were to walk outside here and head over to Times Square, you're surrounded by giant billboards and, for the most part, everyone sees the exact same message. Millions of people walk by and you're seeing the same creative. Contrast that to the experience of walking down Park Avenue into any boutique. The second you walk in, somebody notices you; they see you're a little harried in the middle of the day. You might be in there buying something for your boyfriend or girlfriend. They're sizing you up and figuring out how much you're likely to spend, what kinds of products you're interested in, peppering you with a few questions back and forth. That's a very different experience. That's a real person sitting across from you, sensing and responding to your context, understanding it, and tailoring the message to you. Unfortunately, that's very one-to-one, and it's been impossible to do at massive scale, especially when creative bottlenecks exist.

We know imagery works in marketing. Over and over, the statistics prove that even including an image in search results gets people to engage. It builds a trust. It gives people a sense of what they’re buying. Especially if you’re a consumer brand, if you sell any sort of physical product, if there’s a fit and finish and feel to it, today, the best way to communicate that is through imagery. Images are important, but as we’ve mentioned, for the most part in email marketing today, they’re very static. So there’s a challenge in bringing that dynamism that you might see in the real world into a marketing program like this.

Your context is changing. You might be very similar to me. When I get up in the morning, one of the first things I check is my email. I have my iPhone. I'm running into work. I'm at my desktop during lunch. Some of you may be scrambling to get into the flash sale in time, and you've got a big, giant screen where you're making your purchases. Then in the evening, you're back at home, reclining on your sofa with your tablet. This is a new world. The next step in that world: I'm wearing an Apple Watch, and now there's another way to reach me. Consumers are choosing how and when they engage with you. This is really difficult. If you thought things were tough 15 years ago, keeping up with the vast number of devices and different channels that your customers use is incredibly difficult. This adds a creative burden to any team that is thinking about marketing content and copy and photography. But it's imperative to think about some of these.

I want to share just three ideas today that you could use to tailor an experience and create a very contextual experience based on some cues your customers are telling you the moment they choose to engage. Let’s think about the weather. We actually have a giant chalkboard drawing of this cartoon right now. Spring just started and every season we’re changing that up on our blackboard. But the weather changes so quickly and especially retailers have to think about this. Unfortunately, there’s a huge amount of information to crunch and to decide and to figure out how to tailor offers depending on the weather in your area. Here’s one example. We work with Airbnb and it’s possible you’ve seen some of these emails and are completely unaware that the imagery and the creative and the copy are being tailored for you based upon the weather outside.

In this case, it was very important to them, for their customers who happen to be in very cold-weather areas... So if you lived in New York or Massachusetts or anywhere in the northeast this winter, you are right in the bucket, right in the segment that these Airbnb marketers were looking for. It would detect that the moment you open that email campaign, it happens to be snowing outside or it happens to be 15 degrees outside and it’ll give you a very sunny destination that you could think about and houses you could rent. You get a totally different message and creative if you happen to be down in Florida or in California. Similarly, this could be done in retail.

Allen Edmonds again. Most of their customers are in the northeast and midwest, where it's very cold. But again, the types of offers they put in front of you would vary significantly based on the weather outside. In this case, a very simple weather-targeting rule: if it's above 41 degrees, we want to show you rain gear and umbrellas. If it happens to be colder than that, let's show you something that keeps you nice and bundled up so you might actually get bundled up and get outside of the house. So winter boots, winterizing your wardrobe, those types of things. This was something that was formerly very difficult, and in some cases impossible, because even if Allen Edmonds had someone on their email list, perhaps that person had never purchased from them before, so you don't necessarily even know where they live. There is this vast amount of data, these real-time signals, that were formerly impossible to even collect, let alone tailor against in real time, given the changing creative and the changing nature of your customer.
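
The 41-degree rule described above is simple enough to sketch directly. The threshold comes from the talk; the creative slot names are invented for illustration, and a real system would evaluate this at open time against a live weather feed.

```python
# Weather-targeting rule in the spirit of the Allen Edmonds example:
# above 41°F show rain gear, at or below show winter gear.
def pick_creative(temp_f: float) -> str:
    """Return an illustrative creative slot based on the temperature (°F)."""
    if temp_f > 41:
        return "rain_gear_and_umbrellas"
    return "winter_boots_and_layers"

print(pick_creative(55))  # mild day
print(pick_creative(20))  # freezing day
```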

Let’s jump to a device. When I sit at my desk at work, I’ve got five computers right around me. I have my desktop, I have my tablet, I have my iPhone, I have my Apple watch. There’s probably someone’s old laptop sitting in the corner. It is very hard to guess at where someone is likely to be and how they choose to engage with you. You have to be very responsive to your customers wherever they are.

This is one of the things we did for Comedy Central, where the Stephen Colbert show, rest in peace, was incorporating video into email, but the iPhone experience was very different. There was actually a call to action at the top to download the Comedy Central app, and that would only show up if you happened to be on an iPhone.

American Eagle did something very similar. Why waste that valuable real estate and show irrelevant creative if someone is just not going to engage?

Again, on an iPhone—and at the time, they only had an app for the iPhone—you'd see a banner, and if you clicked on it, you'd go to the App Store and have a chance to download the American Eagle app and engage directly. On a desktop or an Android phone, the creative is the same. It might be mobile-optimized, but it's completely changed up. This was incredibly effective for them. They saw a 230% lift in app downloads simply by having that very relevant call to action and a new way to engage with their customers—mainly, via their app.
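
Device targeting of this kind typically keys off the user agent at open time. A minimal sketch, with an invented banner name and a deliberately simplified check (real user-agent detection is considerably messier):

```python
# Show the app-install banner only to iPhone opens, as in the
# American Eagle example; everyone else gets the standard creative.
def banner_for(user_agent: str) -> str:
    if "iPhone" in user_agent:
        return "iphone_app_install_banner"
    return "standard_creative"

print(banner_for("Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X)"))
print(banner_for("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))
```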

Finally, location. Where you are matters. The second you're opening the email, it's powerful to be able to see offline brick-and-mortar stores where you can transact. You can literally get a digital marketing campaign, walk outside the door, and see the nearest Steve Madden—especially for products where fit and finish and trying it on really matter—and drive people into your stores with a very tailored approach. Steve Madden is one example, and the other is Avaya, a UK-based company that lets you use your Nectar reward points and see, the second you open this, where the local restaurants and businesses happen to be. All super tailored, and interestingly, that map accounted for 31% of the clicks, so people are really enjoying and looking for geo-targeted content.
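
The nearest-store lookup behind this kind of geo-targeting can be sketched with the standard haversine distance. The formula is standard; the store names and coordinates below are invented for illustration.

```python
# Pick the nearest store to the opener's location via great-circle distance.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

STORES = {  # invented example locations
    "SoHo": (40.723, -74.001),
    "Midtown": (40.754, -73.984),
    "Brooklyn": (40.678, -73.944),
}

def nearest_store(lat, lon):
    return min(STORES, key=lambda s: haversine_km(lat, lon, *STORES[s]))

print(nearest_store(40.75, -73.99))  # an opener near Times Square
```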

Finally, to wrap it up, contextual marketing really has to be about providing utility for your customers, creating a totally tailored experience, and thinking about the outcomes you want to achieve. That might be something on your website, or it might be a content marketing approach where you're not hammering someone over the head to buy from you every time, but giving them valuable content.

That’s us. We’re Movable Ink. We’re a four-and-a-half-year-old company right here in New York with offices in London and Buenos Aires. We’re here to change marketing with the use of brilliant and timely photos and images. Thanks, everyone.


Enterprises Are Leveraging Smart Glasses To Be More Efficient

Jay Kim, CTO, APX Labs ©Robert Wright/LDV Vision Summit



Jay Kim, CTO, APX Labs

Microsoft and Magic Leap and the baby... I don't know. These are three really, really tough acts to follow. I'm going to do my best. A lot of what's been presented really deals with awesome content and awesome stuff. What I'm here to talk about is what my company, APX Labs, is doing in the enterprise AR space, specifically drilling down into a form factor of devices called smart glasses. To start, I'd like to show a really short clip of how one of our customers is using Google Glass and AR in their wire harness assembly operations.

Video Voiceover: Okay, Glass, start a wire bundle. Number 2-0-1.

This is a very high-level example of a real-life use case of how these things are being used today. Obviously, with that dark screen at the top corner of the user's eye, this is far from Minority Report. This is far from Terminator vision. But as Microsoft and Magic Leap are also showing in the form of videos, it's not unreasonable to think that there's a lot just around the corner from where we are today. Right now, where businesses are finding the most value in smart glasses, and more broadly within the AR context, is in delivering the information they already have in the systems they have spent billions and billions of dollars and decades of time building. In the case of Boeing, imagine how much workflow actually exists within their databases. Getting that to the people where the work is being done—for people to be able to access it in a heads-up and hands-free fashion—is a really, really powerful concept.

And from a market opportunity perspective—and these are numbers just within the US across four representative industries; obviously there are a lot more industries this can scale up to—we are talking about 12 million people who can access technology like this. Even in the crude and rudimentary way I just showed you in the previous short clip, we're able to deliver information based on the user's context, which is where the vision piece comes in. Vision plays a role in driving enhanced knowledge of user context, and in delivering things like next-generation user interfaces and heads-up, hands-free access to information.

You can do that in logistics settings. For example, a picker in a warehouse. Similarly, in a field-service type environment—if I'm out there servicing wind turbines, I don't necessarily want to have to go up to each of the different panels and systems to access data. I can now do that by looking at the different kinds of devices that are out there. And then, of course, in health care, the upside is obvious: you could be saving lives. Automotive manufacturing—complex assemblies and things like that—is where we as a company have seen the most traction, because the return on investment associated with this kind of technology can be most easily quantified. If you are saving seconds off of a simple task, or reducing the error rate of a complex assembly so you don't have to go and get rework done, there is a very obvious dollar amount attached to all of this.

We are big fans of HoloLens, as far as the devices that have been announced, and I was fortunate enough to recently try it on at Build. From a sensor technology perspective, this is one of the most powerful systems out there, spanning hardware, software, and the integration of both into a wearable form factor. I can't stress that last part enough. It's really, really impressive what Microsoft has done: essentially jamming in a couple of Kinects' worth of sensors, along with advanced cameras, IMUs, and other kinds of radios and processors, and actually making it wearable. That singularly is the biggest challenge that a lot of the industry players who have tried to have a product offering in the smart glasses space have faced. It is really, really hard to cram in the requisite amount of sensors, to gather the proper user-level context, and then to have it be somewhat comfortable. Maybe this isn't a mainstream consumer device just yet, but certainly within the context that we play in, which is industrial applications, there is a lot of appetite for this specific form factor and the capability that it offers. This is tremendously exciting for us. We basically consider this, as far as the state of the art goes today, the most advanced device out there. Look at the number of cameras that are there. It is impressive.

From an optics perspective, optics have been a little bit of a chicken-and-egg problem, in the sense that some of these things that I’m about to show you have existed—it’s just been really hard to drive the price and scale to a point where they can be deployed en masse. So, today what we have is really simple prisms, like you find in Google Glass, where light bounces off a reflector and then a polarizing beam splitter basically just mixes that external light with the light that’s being driven from the projector.

This is the most common and probably the cheapest form that you can get to—a heads-up display or an AR kind of device. But obviously you are getting Coke-bottle kinds of glasses. You probably don’t want that in front of your eyes. And then you’ve got a product like Epson’s, for example, which is a reflective waveguide. Still somewhat thick, but at least you are able to collapse the lens and effectively drive the field of view to be able to get something that sits a little bit more over your eyes. If you think about these optics as something that’s going to have a 13-to-about-23-degree field of view, that’s still not a compelling user experience any way you look at it.

So we go to HoloLens, where you’ve now got stereoscopic displays with the right kinds of sensors that are able to do accurate overlays over a somewhat limited environment. Of course, specifications around the optics and the parameters are not public yet. The overall user experience that is being driven, coupling vision and optics technology, is a generational leap over what we have seen to date. Then, of course, you’ve got Magic Leap and the very neat fiber-based modulation they are doing, where they are able to portray depth.

This is where the technology is going, and really at the end of the day, the goal is to be able to drive a lot of these optics and collapse it into something that is not too unlike the set of glasses that I’m wearing.

Let’s talk about what this all means, more broadly. Smart glasses, from our perspective, are basically just a way to add a human element back into industry buzzwords like “Internet of Things” and “big data analytics.” IoT generates a glut of connected sensors spewing out real-time data at rates orders of magnitude higher than what we are dealing with today. Of course, the analytics systems are going to have to keep up to be able to make sense out of all of this.

Where we see AR and smart glasses writ large coming in is in providing an interface and a mechanism for users to interact with those objects, and to do that using the most natural user interface. Fundamentally, there is value in the enterprise today around the form factors that exist today and around the workflows that exist today. Not too far from here, we are also talking about form factors from large companies that are getting to the glasses type of fashion. There is no question in our mind that consumer adoption of this technology is just around the corner. It’s the start of a very, very exciting market.

Join us at the next LDV Vision Summit.
This panel discussion is an excerpt from our LDV Vision Book 2015  

[LDV Capital is an investor in APX-Labs]

Frequently Acquisitions Deliver NO Money To Founders, Early Employees And Investors

Lane Becker and Evan Nisselson - LDV Vision Summit Fireside Chat ©Robert Wright

Join us at the next LDV Vision Summit.
This article is an excerpt from our LDV Vision Summit & LDV Vision Book

LDV Vision Summit 2015 Fireside Chat with:
Lane Becker, Director of Products & Startups, Code For America
Evan Nisselson, LDV Capital


Evan Nisselson: This is Lane Becker. He is fantastic.

Lane Becker: That’s a great opening. Hey, everybody.

Evan Nisselson: You are a phenomenal serial entrepreneur. Love parties, but only when there are certain goals, which we’ll talk about. We talked the other day. Actually, I think I was flying 30,000 feet in the air and you were somewhere else. I saw a Twitter stream and I responded, and then we started talking via email. We hadn’t talked in a while. This discussion is about the life of an entrepreneur, the roller coaster life of an entrepreneur. The good, the bad, and the ugly.

Lane Becker: Mostly about the ugly.

Evan Nisselson: Is it?

Lane Becker: I think we’re mostly focusing on the ugly today.

Evan Nisselson: We can focus on the ugly, but also in that ugly, there’s a discussion about what’s ugly and how we can do better. It’s a two-way street here. For the audience to know, we’ve both been entrepreneurs for about the same amount of time—18 years.

Lane Becker: Yeah, around that. Since...

Evan Nisselson: 1996, like I said?

Lane Becker: Yeah, 1996, 1997.

Evan Nisselson: I went to the other side of the table after my last company, which had an unfortunate disaster, for which I blame myself. The company got to about $3 million in revenue. It was a SaaS platform and we were trying to raise money or sell in June of 2008 with an investment banker. I was chairman at the time and we had about a dozen companies interested in June. In September of 2008 the economy crashed; the potential leads all closed their doors and said that they were no longer interested. Diablo Management was hired by the majority shareholders to liquidate the company. That was actually their name, “Diablo.” Several companies were in discussion with them to acquire the company, including myself. I tried to negotiate with the devil, honestly, to buy back the company to keep it alive, but it was not possible during that economic crisis.

Lane Becker: That’s truth in advertising right there.

Evan Nisselson: It was really amazing.

Lane Becker: They know what they’re doing.

Evan Nisselson: The bizarre part, and then enough about me—we’re going to get to you and then back and forth... Actually, Andy, who was our CTO, who’s in the audience, was at the last board meeting. Our typical board meeting was about seven people—normally investors, myself, and a couple executives. And this last board meeting was about 25 people, including lawyers and others on the phone. It was announced about 12 hours prior to the board meeting that Diablo was invited to come to the meeting by the majority investors. And all of a sudden it was a discussion of liquidating the company—of what to do, what not to do, and how to manage liability. Let’s transition to you. So you’ve done a bunch of companies. How many?

Lane Becker: Three. I mean, if you want to go back to the stuff I did in college, four or five, but I’d say three in that kind of classic, Internet startup space. One was a design consultancy. One was an analytics tool, an early analytics tool in the mid-2000s that we sold to Google, and then my most recent company, Get Satisfaction, which I think will be the subject of this discussion.

Evan Nisselson: It will be the majority of the subject, absolutely. Talk about Get Satisfaction. When you were starting it... Or before you start, what was your goal? Why did you want to start that one?

Lane Becker: Oh, it’s a good question. Well, it was 2007, which was a lifetime ago at this point in Internet years. It’s funny. While getting ready for this conversation, I was thinking back to all the stuff that didn’t exist in 2007 when we were starting Get Satisfaction—like Facebook as a platform that anyone used that wasn’t a college kid, or Twitter, or SaaS. Even the concept of SaaS. I remember when we were originally pitching Get Satisfaction to First Round Capital, actually Rob Hayes from First Round Capital, who was our first investor, early on in his investing career. I remember that I had a slide where I was showing the buying page from Basecamp and I was like, “I think we want to sell like these Basecamp guys do, where there’s three pricing tiers and there’s this one in the middle, and we just think that’s a really great way. We’ve never seen anybody sell like that before. I think it would be a really great way to buy online.” I remember Rob, who is a wildly successful investor, going like, “Yeah. That will never work.”

Evan Nisselson: And your response was?

Lane Becker: He is a much better investor now. We actually listened. I would say my entire experience at Get Satisfaction... I also want to take responsibility for the mistakes that we made. I could talk about the mistakes that we made all day.

Evan Nisselson: We’ll talk about some of those, too.

Lane Becker: When it comes to our investors, I would say that the mistake that we made was that we listened a little too often.

Evan Nisselson: How do you choose when to listen? I had similar challenges as well. We all do. It’s life.

Lane Becker: I don’t know. Thor, Amy, and I came up with something... Thor and Amy are a married couple, the Mullers, and we came up with what we called “The Muller-Becker Rule of Investor Advice,” which is that you should always assume that the approximate percentage of advice that your investors give you that is correct is equivalent to the percentage of your company that they own. Good luck figuring out which percent that is. I don’t think there’s a great rule.

Evan Nisselson: There’s not, that’s why I asked—because you’re smarter than I am.

Lane Becker: I talk to a lot of people about how to manage boards since the experience of Get Satisfaction—and particularly the experience of not managing our board particularly well at Get Satisfaction—and I think it really comes down to just really knowing what it is that you believe in or what you care about, making sure that is represented, and then weighing all the advice that you get relative to that. These are people who are spending part of their time looking at what you’re doing. You’re spending all of your time looking at what you’re doing. You’re the one with deep knowledge and deep experience in that area. The advice is valuable, but you always have to gauge it against: What is the core of what it is that I’m doing? and How do I apply it against that and decide? I realize that’s really abstract. It’s also almost impossible to do, especially relative to your investors, who definitely have a sort of power or authority position over you.

Evan Nisselson: I think that makes a lot sense but I just realized, we should probably take a second and back up. What was Get Satisfaction in the beginning? The goal of it, the funding that came in, and the recent outcome, which is what sparked our discussion when I was flying in the air and saw a Twitter stream discussion that I thought was very valuable.

Lane Becker: Get Satisfaction was a customer service community platform. We came up with the idea in 2006, 2007, looking at online forums and seeing all the conversation that was happening about people who had products, sharing ideas, sharing problems, and getting solutions from other customers. So the original idea for Get Satisfaction was a consumer site that was basically customer service without companies. How do we create a space where customers can share ideas, answer questions? That sort of thing. It did pretty well initially and early on, but very quickly, one of the things that we had done is we’d created a mechanism for employees to... This is our little accidental growth hacking thing, what would now, I guess, be called growth hacking. We had created a way for employees to self-identify so they could come and they could say, “Oh, I’m an employee of Yahoo, I’m an employee of Apple” or whatever. There were no Apple employees that showed up, at least officially. There were plenty that showed up unofficially.

Evan Nisselson: Maybe they’re here unofficially, too.

Lane Becker: We would give them a badge and they would start answering questions alongside the customers. It was actually very effective. Again, this is prior to Twitter and Facebook becoming customer service platforms. That was still just a twinkle in someone else’s eye. I think we were very directionally accurate in that sense about what the product needed to be, but kind of basing it on old technology forums instead of thinking about where it needed to go. But the thing that did happen that was so interesting is that once the employees started finding out about us... We’d done a really good job on SEO on the pages. Mid-2000s, SEO was still kind of a thing, and it turned out that all of these marketing and customer service types in all of these different companies had actually set up Google Alerts for the name of their company, and we had made sure that the name of their company was very prominent on the page. Basically, what we had developed is that as soon as someone came in and asked a question about, say, a Samsung product, it was like having a direct line into all of the marketing and customer service people in Samsung, because they’d all set up these Google Alerts and it was SEO’d so it would show up relatively high. We developed this really fantastic way to get access to all these people, and so suddenly we had all of these employees from all of these companies, some fairly high up, who were self-identifying and answering questions around these products. That was the point at which we started having a conversation with Rob and some of the other folks on our board about, Maybe we should start to turn it in this direction. Maybe we should go back to that original idea we had about selling this as a product, as opposed to the place that we had initially pushed it based on our investor feedback, which was towards more of a consumer environment.

Evan Nisselson: At that stage, you were making how much revenue?

Lane Becker: Oh yeah. We were making no revenue. This is Digg-era days of trying to get big fast with a consumer platform. So much of this ends up being subject to the sort of vagaries of the moment. Do you know what I mean? What’s the hot thing? What’s the direction that everybody else seems to be going? Where can we pattern match today? We were kind of flip-flopping around a little bit based on that. Personally, I think if we had just stuck with our initial idea of following the Basecamp model and getting into the SaaS approach earlier, we actually probably would’ve been in much better shape in 2008 when everything tanked. It’s funny, listening to your story and my story...


Evan Nisselson: Well, not really funny.

Lane Becker: Well, I’m going to go with funny because I have no choice but to laugh at this point. My childhood dream was to be a cautionary tale for others. Here we are up on stage. In 2008, the economy tanks and there are these macro conditions that have fuck-all to do with anything that any of you in the room are doing. We can blame 2008 entirely on a bunch of guys wearing ill-fitting suits sitting about a mile and a half that way. That way? That way? I’m lost, orientation-wise. Again, what they were doing was terrible. It had nothing at all to do with what I was doing or with what you were doing, but those macro conditions totally influenced our ability to succeed. In your case, it was your ability to sell. In our case, it was our ability to raise a Series A, which had been going quite well up until that point. Suddenly, the money’s not there. So I’m scrambling. I end up asking some of our better-off friends... I mean, I did all sorts of things to keep the company going through this period, like asking people for money.

Evan Nisselson: Tell us a couple of those things that you did.

Lane Becker: Actually, one of the things that I did which was fantastic advice that I got from one of the guys who would later become our investor, Josh Felser from Freestyle... Josh is amazing. I would totally recommend taking money from him and that’s a very short list of people that I have for that.

Evan Nisselson: You’re an advisor to them.

Lane Becker: Yeah. I was for their first fund.

Evan Nisselson: They’re also an investor in one of our companies, Camio, and the CEO, Carter, is in the audience. Here he is, right in the back.

Lane Becker: That’s right. Everybody likes Josh. Josh has this thing that he says, which I think every investor should say but very few do. He says, “We will always have an agenda, but we promise to share it with you.” So when he feels like his interests are going to diverge from your interests, he will point out how that is happening and why he is lobbying for something separate from you, which is terrific, because there are always times that your investors are going to diverge in their interests from you. Always, in every company.

Evan Nisselson: Is that solely a personality trait or is that because he’s a serial entrepreneur?

Lane Becker: I think it’s a personality trait but I also think it is both.

Evan Nisselson: Both.

Lane Becker: Yeah, I’ve watched them win all sorts of investment opportunities based entirely on the fact that he’s able to empathize with the people that he is talking to far more than somebody who’s never actually run a business in their life. He suggested that we have a dinner party and that I invite all my high-net-worth friends to come to the dinner party and basically just tell them, “Hey, I’m pitching you an investment and dinner will be terrific, but if you’re not interested in investing, we’ll have other dinner parties. Don’t bother to show up.” He was like, “80% of the people won’t show up and then 20% of them will, and the 20% that show up are all going to invest.” Which was completely, 100% accurate. That is exactly what happened.

Evan Nisselson: Wow. that’s great.

Lane Becker: It was. This is like December of 2008.

Evan Nisselson: Did you believe him or was that like, I’ll give it a try. I don’t think it’s going to happen? At that stage, it was kind of like I’ll do anything to try and raise capital to keep the company alive?

Lane Becker: I was already in that I’ll-do-everything-to-raise-capital-to-keep-the-company-alive place. I was very much there. In fact, this story has a very sad ending, in which we sold Get Satisfaction to a company called Sprinklr, probably like six to eight weeks ago. And every early investor, every employee, all of the founders got completely washed out, so none of us see a dime from the sale. The only people who make anything off of the sale are, interestingly, all the people who are still sitting on the board: later investors, the current CEO, and the former CEO. It’s funny how that works. Actually, the mechanism by which that works is one of the things I want to talk about. We should get to that. One of the things I have observed through the experience of being more public about this than most people are when their company sells (but not really) is how much opacity there is. We have so much transparency and we talk about our world as so transparent and open, and it certainly is relative to the way investing worked in the ’90s, for example, but I want to be transparent and open about things. There’s still this surprising shroud of secrecy and uncertainty around how things end when they don’t end well, which is most of the time, or at least they don’t end fantastically. It turns out there are things you need to know about that part, too—including, for example, the way we got screwed, which is one of the many ways you can get screwed in a sale where not everybody’s going to see something from it.

Evan Nisselson: We’ll talk about that in a second. Let’s lead up to it. You evolved your role with the company from what to what and when?

Lane Becker: Well, I was always co-founder and chief product officer. I didn’t actually take the CEO title, but Thor and I used to joke... he was the CEO of the company. We used to joke that he was the peace-time CEO and I was the war-time CEO. He was really good when things were going well. I was really good when you needed somebody to get kind of pissed off at board meetings. I sort of ran that piece of it and the fundraising piece and everything that happened post-2008. Thor was still very much the CEO of the company.

Evan Nisselson: A couple of years ago, you switched again. Correct?   

Lane Becker: Yeah. So what happened to us post-2008 is that we were basically told by our board, “Okay, it’s great that you guys are having fun with this little consumer toy, but it’s time to bring the adults in.” This is kind of classic Silicon Valley behavior prior to the Zuckerberg-Sandberg-Andreessen-Horowitz era, where their one-two punch was Sheryl Sandberg coming in underneath Mark Zuckerberg instead of on top of him, which is what they would’ve done in years previous and what they would’ve done, frankly, if Zuckerberg hadn’t had Peter Thiel advising him on how to structure his board... It’s true, he was 20. He would’ve gotten screwed if he hadn’t locked into some good advisors; and Andreessen Horowitz opening up and saying, “We believe in founders. We think founders need to stay in charge of their businesses. We think a good VC teaches a founder how to become an investor.” That is all absolutely, 100% true and came a couple of years too late for us. So in late 2008, they’re basically like, “You need to raise money and prove to us that you can raise money in this totally fucked up environment or we’re going to lose all faith in you. And oh, by the way, also we’ve lost all faith in you and we think you need to get an adult in here.” Those were sort of the messages...

Evan Nisselson: It doesn’t sound like it was an option. It was a way of phrasing.

Lane Becker: Right. The thing is, if I could go back and do it again...

Evan Nisselson: What would you do differently?

Lane Becker: I would just tell them to fuck the hell off. Seriously. And you know what? I think that’s kind of what they wanted me to tell them. I actually ended up having a conversation with Rob Hayes years later about Travis Kalanick, who, let’s be honest, is clearly the most successful asshole billionaire in the industry, right? Really plays it to the hilt. This is long before Uber is in any other city besides San Francisco. Rob Hayes, to his credit, was one of their seed investors at First Round, so nice work, Rob. He told me a story about Travis Kalanick that really stuck with me, and this is after I had left Get Satisfaction, so it was probably in 2010.

Evan Nisselson: You left and you were still on the board or you weren’t?

Lane Becker: I left. I got pushed off the board first with the Series A round, and then Thor got pushed off the board in the Series B round. Rob tells me the story about how there’s a board meeting he needed to reschedule because he had a conflict, so he has his assistant call Travis. Travis probably didn’t even have an assistant at that time. Call Travis and say that Rob needs to reschedule the meeting, and Travis says, “Fuck you. We’re not rescheduling that meeting. I’m putting my time and energy into this. He needs to put his time and energy into this, too.” Rob was like, “I have so much admiration for Travis for doing that.” And I realized, “Oh, that’s how we fucked up. Rob’s a bottom.” Clearly, that’s what he wanted. He wanted me to dominate him, because that’s what venture investors want from you. Now my attitude towards this sort of thing is to go to a BDSM metaphor. Thor, who was always the more politic of the three of us, his take on it is that your investors are always testing you. And that was the test. In that sense, we failed that test because in that moment, we weren’t forceful or aggressive enough. We weren’t doing the things that they needed us to do to see that we were passionate or committed. And I actually think, as fucked up as that is, it’s also totally, totally true.

Evan Nisselson: I actually...

Lane Becker: I usually don’t swear this much.

Evan Nisselson: The subject is relevant for swearing. I’m sure I could loft some out there as well, depending on the question you’re asking me.

Lane Becker: Apologies if you have delicate ears.

Evan Nisselson: No, the kid left earlier so he’s no longer here. That was Serge’s kid that I invited and gave a little name tag, if anybody didn’t know. What is he, three weeks? A month old, a month and a half? So that brings up the point. Let’s finish the story of the situation and go back to talking about investors and the crux of how you got screwed. All of a sudden it sells—the company. And you’d been out.

Lane Becker: I’d been out for awhile.

Evan Nisselson: You probably had limited knowledge of what was going on.

Lane Becker: Limited.

Evan Nisselson: You’re still the shareholder.

Lane Becker: Limited and as we’d argued, deliberately incorrect knowledge of what was going on.

Evan Nisselson: And you were out for how long?

Lane Becker: I left in 2010. Thor left in 2011 and I believe Amy left in 2012 or 2013.

Evan Nisselson: So over the last two to five years...

Lane Becker: Left.

Evan Nisselson: Right, and still had equity. Not a lot.

Lane Becker: Still had equity. Sort of ever decreasing.

Evan Nisselson: Decreasing, recapping, and other things.

Lane Becker: By the end, probably the three of us collectively owned between 7% and 10% of the company, depending on the day.

Evan Nisselson: Then all of a sudden news hits. Get Satisfaction is sold.

Lane Becker: Right.

Evan Nisselson: Tell me the story just before that, because I think there was some behind-the-scenes to that. How did you find out and then how did it evolve to all of a sudden having an extensive discussion with entrepreneurs around the world on Twitter?

Lane Becker: This is where the story gets kind of gross and ugly.

Evan Nisselson: And that’s why I asked.

Lane Becker: Yes.

Evan Nisselson: I’m sorry. That’s why we’re here.

Lane Becker: I don’t mind. What the hell. We’re here. We found out because, even after leaving the organization, we had actually maintained pretty close ties with a number of employees, which is what I would recommend to everybody, even if and especially if you end up getting shoved out of your own organization. Employees on the ground usually know what the hell is going on. In this case, we actually found out from an ex-employee who had also done the same thing, who had maintained tight relationships. And apparently, the employees at Get Satisfaction had been explicitly informed not to tell any of the founders that the sale was happening, because the current management, I won’t be more specific than that, had decided that we were a liability in this situation, and they weren’t going to tell us that the sale was closing until—it’s unclear to us—either exactly the day it was closing or perhaps the day after it had closed. So they had just cut us out of the loop entirely.

But this ex-employee had caught wind of it and he had no reason not to tell us, so he just called us up and he was like, “Hey, FYI, your company’s selling.” That is just the shittiest way to find out that your company is selling. At the time, though, we assumed, Okay, well, we’re probably getting washed out, right? Because why wouldn’t they be telling us if they were going to get us even a little bit of capital? Now, up until this point, our understanding had been that revenues had sort of leveled off. We knew that they were struggling. We understood that they’d done a convertible note with really onerous preferences towards the end in an attempt to keep things going. But again, they were all happy, happy, rosy, rosy in their conversations with us.

What I understand now is that it was 3x... A convertible note with a 3x liquidation preference was there primarily to ensure that only the people who participated in that note were going to see anything from it, because they had done a successful job of hiding how badly, frankly, they had managed the business, how the revenues appeared to have gone off a cliff. So they were all kind of scrambling to make sure that they were going to get their money, or they were going to be able to get something out of it. That’s what that last year actually was—not an attempt to get the company back on its feet, but an attempt to basically steer it in a direction that was going to guarantee the maximum outcome for the people that still had some insight into what was happening: the people sitting on the board.
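To see why a 3x liquidation preference can wash out common stock entirely, here is a minimal sketch of a simple non-participating preference waterfall. The dollar figures and the single-tier structure are hypothetical illustrations, not Get Satisfaction’s actual terms:

```python
# A sketch of how a multiple on a liquidation preference can wash out
# common stock. Numbers and the single-tier structure are hypothetical,
# not Get Satisfaction's actual terms.

def common_proceeds(sale_price, preferred_invested, pref_multiple):
    """Simple non-participating waterfall: preferred holders take
    pref_multiple times their money (capped at the sale price) before
    common stock sees anything."""
    preference = preferred_invested * pref_multiple
    preferred_payout = min(sale_price, preference)
    common_payout = max(0, sale_price - preferred_payout)
    return preferred_payout, common_payout

# Hypothetical: a $5M note with a 3x preference absorbs the first $15M
# of any sale, so a $12M exit leaves nothing for founders, early
# employees, or early investors holding common stock.
pref, common = common_proceeds(sale_price=12_000_000,
                               preferred_invested=5_000_000,
                               pref_multiple=3)
print(pref, common)  # 12000000 0
```

In this sketch, any sale below three times the note’s principal goes entirely to the note holders, which is the mechanism Lane describes: the people who participated in the note set the terms so that only they would see proceeds at a low exit price.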

So we find this out. Thor very politely emails their CEO and CFO and says, “Hey, you mentioned awhile back that you were thinking about maybe acquisition or fundraising. How’s that going? Can we maybe talk to you about it?

Come in and talk to you about it?” One of them, the CEO or the CFO, writes back and says, “Yeah. We’re really busy this week. Why don’t you come in next Wednesday?” Thor was like, “So, funny story. We actually know what’s happening and we think you should bring us in much sooner.” And all of a sudden they’re like, “Oh yeah, you should come in on Friday.” So we go in on Friday and we know that we’re going to get washed out. There’s no way they would’ve been screwing with us as much as they were if we weren’t going to get washed out. But we go in basically with an argument like, “We think, ideally, you should at least recognize that the common stock is getting completely washed out in this instance. We still think you should give something to the common. Even if you’re not going to give something to the common, even if it’s just like pennies on the dollar of the sale, in recognition of all the people who have put so much time and effort into this...” This, by the way, is not uncommon behavior, even in the situation where common gets washed out. Frequently, in order to maintain relationships, or in recognition of the work that’s been done, or to just not look like an asshole, the preferred stock will actually throw something to common anyway. That’s actually something that does happen. Not in this instance. We were like, “Okay, even if you’re not going to give it to common, at least recognize that we are the founders of this fucking business and we put a ton of time and energy and capital into it, and we would like to see some small token thing just in recognition of that. And hell, even if you’re not going to do it because you’re nice people, you should do it because why the hell else would we support this sale? What is the point of us supporting this sale?” And the CEO’s like, “Oh, but it’s your baby. Don’t you want to see your baby make it out into the universe?”

Evan Nisselson: It’s his argument that if you don’t support it, it goes out of business?

Lane Becker: No. His argument was “suck it up,” basically. He basically says, “Okay, I’ll go back to the board.” Oh, so the other thing we learned in this meeting on Friday, besides the fact that we’re getting washed out, is that the sale’s closing on Tuesday. And we were like, “Wait a minute, you told us you didn’t want to talk to us until Wednesday?” Like, what? Jerk!

Evan Nisselson: You probably used a stronger word than that. You don’t have to use it here.

Lane Becker: No. There was this really funny part. The worst part about this meeting is that I go in, Thor goes in, Amy goes in, and we know the whole point of this meeting is for him to deliver us this shit news that we’ve been washed out. But before he gets to that, the first 20 minutes of the meeting is spent with him grilling us on which employee told. “Which employee told you? Which employee told you?”

Evan Nisselson: So that situation was horrible. Let’s jump to...

Lane Becker: He says, “We’re going to ask the board.” The board basically comes back and says, “No. We’re not going to give you anything. We’re not giving common anything, but we do expect you to support the sale.” And it was in that moment that I realized this happens. It’s so frequent. I end up going out on the morning of the sale... It’s Tuesday morning and this congratulations note hits my phone and wakes me up at 6:00 in the morning, because the news has gone up on the wire that Get Satisfaction has sold. So all of my friends start doing the thing that you would totally expect them to do because they’re your friends, which is they start sending congratulatory notes. I was just not in the mood for it, so I wrote this thing on Twitter, which I have to say I did not expect to get the kind of pick-up that it did. I was basically just like, “Hey everybody, I appreciate the sentiment, but don’t congratulate me on the sale, because the founders got totally washed out and we got nothing.”

Evan Nisselson: You are a very genuine and sincere person. That’s what the message was, but to everybody else, that’s unusual in Silicon Valley.

Lane Becker: I know.

Evan Nisselson: Unfortunately—or in most of the ecosystem.

Lane Becker: No, I know that’s accurate actually because one of the very, I don’t know if “amusing” is the right word, but one of the really fascinating outcomes of this is that I got a lot of private messages from a lot of successful entrepreneurs who all said something along the lines of, “Wish I could pretend I didn’t know what you were talking about.” It just made me realize in that moment: this is absurdly common.

Evan Nisselson: I was on the plane and I saw the thread, which had many, many comments—I don’t know if you have any numbers, I mean dozens, hundreds... It just started going and going and going and going.

Lane Becker: Yeah, it was great for my follower count.

Evan Nisselson: Anyhow, it started and I felt so bad. I know what it’s like. A friend was going through this and I actually deliberated for like 20 minutes on the plane: Do I post publicly? What can I say? What would be appropriate? What would be right? What would be helpful? And then I just sent you an email and that’s where we started a back-and-forth thread. A lot of people thanked you for sharing that.

Lane Becker: Yeah. It made me feel great, actually.

Evan Nisselson: But now that it’s out, and you look back... So talking about the investors. First question actually, most importantly: You mentioned earlier there’s a lot of negative results. There’s only a small percentage of successes. Our audience is all trying to build businesses or build technology. Are you going to do another one?

Lane Becker: Oh, yeah. I would totally do it again.

Evan Nisselson: Perfect, I was assuming that was your response because there’s only one answer for a true entrepreneur. But now, looking back at that, what would you do differently next time?

Lane Becker: We covered the one point, right, which is that I would have had a lot more independence and this is something I think comes with age.

Evan Nisselson: Independence?

Lane Becker: From the board. I would’ve set things up better so that I would’ve been able to maintain control because I understand how to do that now. And then I would’ve actually maintained control because it’s not just about the percentage ownership or how much money has been invested. It’s also about the more subtle social ways in which investors can create pressure on you. I mean, at the time that we gave up the CEO role of Get Satisfaction, technically we still owned more than 50% of the company. We didn’t have to do that. It was far more the intimidation factor of it that made us do it than anything else. I just feel much more prepared for that sort of thing these days.

Evan Nisselson: I had a similar situation where, unfortunately, we almost had a large financing and it blew up in the 12th hour. And all of a sudden there was a transition where the top investors and the new CEO said, “It’s best if you don’t come into the office after you transition to chairman. It’s best for everybody.” And actually, it was a very difficult period because I wanted to do what everybody thought was best for the company and at the time I was not sure that was the right decision. Hindsight is 20/20, but now I realize the decision of not going into the office after that transition was a disaster and wrong on many levels. How else can we learn if it’s not from our own challenges? Is it advisors? How do we know which one’s the right advisor? Is it just that we have to go through this crap and become better on the other side and hopefully it’s not disastrous?

Lane Becker: Well, I definitely think learning from other people’s experience is better than learning from your own experience.

Evan Nisselson: That’s a fucking rhetorical question. And see, I cursed. I knew it would come up when I started talking about this. How would you answer that differently?

Lane Becker: If I were doing this again today, I would definitely have a much stronger support network. There just weren’t as many of you in 2008, 2009, honestly. I would definitely build a much stronger support network. Or I would have come up through a much stronger support network, like, say, an accelerator-type structure that gave me access to other people who I could talk to and work with. I think that is huge. The other thing I would do differently today is I wouldn’t panic as much as I did then.

Evan Nisselson: I panicked all the time.

Lane Becker: Yeah. And now, I’m like, “Well, it’s just a company.”

Evan Nisselson: One of the things I try to do now after successes and failures... I am a mentor to about five accelerators and try to help others avoid the mistakes that I made. You can never dictate. I think it is best to share stories which can offer perspective for others to make their own conclusions. That’s the same thing I do as an investor. You’ve met some investors now who you would definitely work with in the future, and you probably have signals that say, “Oh no, not that one.” Tell us a little bit about how you would choose those signals. I know we’ve got a couple of minutes left and need to wrap up, but this is a fantastic discussion to help others avoid the mistakes that we might’ve made.

Lane Becker: I just think your investors are essentially your boss. For all the blah in this industry about how you get to be your own boss, that’s really not true. You have people that you report to, right? And your investors are those people, and so like any situation where you’re going to have a boss, the question is: Do you like and trust this person? At the end of the day, that’s really what it comes down to. Are you able to communicate with them in a way that feels meaningful? Have they done things, have they said things that are indicative to you in some way, shape, or form that you can trust them? Do they talk about things as if they were experienced? Do they have an experience that you can relate to, because that’s one of the fastest ways that you can form a trust relationship with somebody? I think the reason why Josh and probably you were successful with a lot of the companies that you invest in is because that resonates with them. I just look for that human connection in this situation and some sign that they’re not just some autonomic bottom feeder trying to take what they can and then run off, which unfortunately, the business world has quite a few of.

Evan Nisselson: I think that’s great advice. We could talk for a long time, but I want to end on this: Is there anything else that from your experience you’ve learned that you would give as concrete advice to anybody in the audience who is either building or wants to build a company?

Lane Becker: I think you’re better off when you think about venture capital as a game, or taking venture capital as a game. I recognize that it has all sorts of real-world inputs for a lot of people. Actually, starting companies is one of the ways that they aspire to class-jump, which is very hard to do in this country. We have a surprisingly static class system and Silicon Valley is still very aspirationally one of the ways that you can do that. So you’re thinking about your future, you’re thinking about the people that care about you— there’s all sorts of reasons. I recognize the gravity of it. At the same time, you are going to be better off if you can treat it as if it wasn’t the most important thing in the world. You know what I mean? Entrepreneurs are always at a disadvantage in negotiating situations with investors because the person who wins a negotiation is always the person who has less of an emotional investment. Always. You can go into any negotiation and if you have more of an emotional investment, you will lose. That’s just kind of the deal. You have to figure out how to manage that and to me, managing that means pulling yourself away from it as much as possible, looking at the situation dispassionately and understanding that if it’s a game, there are rules. It is a system. It’s kind of a weird game because some of the rules aren’t necessarily as apparent to you as others. If you can, treat it as a game—understanding the system and the system dynamics, understanding the person and their motivations and intentions, and understanding how they’re going to align or not align with yours. On the one hand, I want to say, “Get to know that person as a person and trust them,” and on the other hand, I want to say, “Step back from the situation and recognize that it’s not just about the two of you as individuals.” It’s actually about the system that you’re participating in, the expectations that you have and that your LPs have, your returns on capital, and the macroeconomic situation.

Evan Nisselson: So it sounds like you have to do both and there’s a fine line between getting to really know them and taking a detached perspective. I think that’s fantastic advice. It’s probably everything in life. But in this situation, investors, co-founders—it’s very critical to have that view without blinders on.

Lane Becker: Yes. I don’t think it’s surprising that so many of the startup founding teams that manage to build these sorts of wildly successful companies are on their second or their third or even their fourth company before it kind of hits. The Twitter team is a great example of this. It is the act of working with these people over time, learning their strengths and their weaknesses, figuring out how you can trust them, figuring out where you can lean on them, that makes you quite successful. I have no doubt that that’s true over repeat investments as well.

Evan Nisselson: Right. This has been fantastic. Thank you very much for sharing the ups and the downs with transparency. You’re going to be here for the rest of today?

Lane Becker: Yeah, I’ll be around.

Evan Nisselson: Others, if you have questions, we don’t have any more time right now but Lane is here. He’s fantastic. Thank you for sharing and enjoy the rest of the Summit.

Lane Becker: Thanks for having me.

Evan Nisselson: Thank you, Lane.

Lane Becker: Thanks, everybody.

Evan Nisselson: A round of applause, guys.

Join us at the next LDV Vision Summit.
This article is an excerpt from our LDV Vision Summit & LDV Vision Book

 

Photographs Can Tell A Powerful Story That Is Unique To Any Other Creative Format.

©Bijan Sabet

Bijan Sabet is co-founder & general partner of Spark Capital. We are honored that he will be one of about 80 expert speakers at our next LDV Vision Summit May 24 & 25 in NYC. We are starting our fireside chat with Bijan virtually and hope you join us at our summit next month to hear the extended live version.

Bijan is a serial entrepreneur, successful venture capitalist and photographer who prefers making pictures with his Hasselblad camera. All businesses and humanity are being disrupted and empowered by visual technologies such as augmented reality, virtual reality, computer vision, machine learning, artificial intelligence, video monetization, content creation, medical imaging, satellite imaging and much more. 

Evan: I used to make pictures with a Nikon F & FM and then gave them up when I wrote an article in May 2003 that cameraphones would replace point and shoot cameras. I have many questions and I am always fascinated to hear the wisdom of others who are smarter than me. That is one of the main reasons we gather experts to share their vision every year at our LDV Vision Summit. 

What is the most valuable visual content? When do people prefer to watch videos or photos? Will people prefer to enjoy 2D, 3D, 360, virtual, augmented or the next new new type of visual file format? When will cameraphones have the quality of DSLRs? When will we swallow camera pills to photograph inside our body in real-time? Why and how do people choose which camera they wish to make pictures with today, tomorrow and in the future? 

So, Bijan, you are an avid photographer who makes pictures with digital and analogue cameras. What was your favorite camera 20 years ago? 

Bijan: In those days, my favorite was an Olympus Stylus, a simple but beautiful 35mm film camera. 

 

Skateboard Park, Venice, California. Leica M3, Kodak Portra 400 © Bijan Sabet

Evan: What is your favorite camera today and why? What do you think will be your favorite camera in 20 years?

Bijan: My favorite camera today is my Hasselblad. It's a medium format camera with manual everything and no batteries. It makes 12 exposures per roll of film. Every photograph takes time and purpose. The lens on that camera is the most beautiful lens I have ever used. I just love it. 

I hope this old Hasselblad is my favorite camera in 20 years as well. The funny thing about film is that this Hasselblad could very well be my favorite a few decades from now. Can you imagine using a digital camera from 5 years ago, never mind 20 years? But with film it's more than possible. The "sensor" stays the same and the lenses are already the best. 

Old Car, Mission Street, SF: Hasselblad 503cw, Kodak Portra 400 ©Bijan Sabet

Evan: Why do you sometimes choose to shoot with film versus digital & vice versa?

Bijan: I always prefer to shoot film. I love the feel, the look and my emotional connection to the process. I only shoot digital when my iPhone is the only camera I have or when I need to shoot with a DSLR to capture my kids during a sporting event. 
 

David Karp: Leica MP, Leica 50mm Summilux, Kodak Tri-X 400 ©Bijan Sabet

Evan: You have invested in people building businesses that leverage visual content such as OMGPOP, Tumblr, Twitter, Lily Robotics among others. What are the most valuable attributes of visual content?

Bijan: I am drawn to tools, products & communities that allow & encourage creatives to express themselves. Often that is quite visual in nature. I think visual content, specifically photographs, can tell a powerful story in a unique, compelling way that is unique to any other creative format.

Ireland tunnel: Hasselblad 503cw, Kodak Portra 400 ©Bijan Sabet

Brooklyn Bridge: Leica M9, 35mm Summilux ©Bijan Sabet

Evan: You frequently experience startup pitches. What is your one sentence advice to help entrepreneurs improve their odds for success in the LDV Vision Summit Startup Competition?

BC Grad Tech Club @davidloverme

Bijan: Be yourself and make something amazing. 

Evan: I look forward to speaking with Bijan and all of you in more detail during our fireside chat at our LDV Vision Summit in NYC on May 24 & 25 [50% discount off tickets until April 30]. We try to make our sessions very interactive and look forward to your questions. 

Other expert speakers at our Summit are from Google, Refinery29, Facebook, Cornell Tech, Qualcomm, First Round, Lytro, Greylock Partners, Olapic, Quartz, Mapillary, Microsoft Research, CakeWorks, NBCUniversal, RRE Ventures, Magic Leap, Mine, Samsung, Enlighted, Flickr, IBM Watson, and many more….

Hope you will join us and look forward to speaking with you! 

 

Entering competitions increases your odds of being recruited, raising capital, or selling for more than $100M.

[reposted with updates for our next LDV Vision Summit May 24 & 25, 2016 in NYC.]

Every day, there are many hackathons, startup competitions, and events around the world with different styles, focuses, attendees, locations, and goals.  

I invest in entrepreneurs at a very early stage and mentor many other entrepreneurs via accelerators such as 500, Seedcamp, Founders Institute and NYSeed. Entrepreneurs from different countries frequently ask if they should attend a conference or compete in a competition or a hackathon.

My question to them is always the following: Will that event help you reach your goals?

Events are great places to meet co-founders, clients, and investors. In addition to networking, competitions and hackathons are also great opportunities for you to prove your brilliance and skills with a prototype, algorithmic solutions to computer vision challenges, or visions for a new startup. Investors, co-founders, recruiters, and clients are always looking for the best people, and we all have the problem of filtering out the noise to find the high-quality signals. Surpassing other competitors to make the finals and possibly win is another valuable validation to help you rise above the masses.

The worst thing you can do is sit behind your computer or in the audience and do nothing. It’s easy to say, “I had that idea years ago,” but maybe you didn’t do anything about it. Maybe you didn’t take a risk to prove yourself. If nobody knows, then it never happened -- much like the tree falling in the forest: if nobody hears it, did it really fall?

The key is to be sniper-focused on attending and competing where it is most contextually relevant for you to have the highest impact in reaching your goals. If you are working on wearable devices, then you should go to the events that are focused on that sector. If you are working with photos, videos, computer vision, deep learning, artificial intelligence, satellite imaging, medical imaging, autonomous cars or anything related to the visual web, then you should go to our LDV Vision Summit and compete in our competitions.

LDV Vision Summit 2015: 6 of the 8 companies that competed in our 2015 startup competition have since raised capital: StreetBees, PartPic, Entrupy, Stringr, and The Smalls raised funding, and Sphericam raised $450K on Kickstarter. 

"We met Russell Glenister last year at the LDV Vision Summit, and he has since invested in The Smalls. The Summit has had a huge impact on our business, which is now completely taking off! Anyone working in visual technologies should definitely compete and attend for the incredible networking opportunities with top-tier experts and investors,” says Kate Tancred, CEO of The Smalls.

"It was a great pleasure to introduce Streetbees - the world's intelligence platform - to a distinguished jury at the LDV Vision Summit. Anyone working on a visual technology business should definitely apply to the Summit startup competition." says Tugce Bulut, CEO of StreetBees.

"Judges Awarded Me 3rd but I Personally Won" by Erik Erwitt.

Other fantastic outcomes by speakers at our 2015 Summit include: Apple acquired Emotient; Magic Leap raised ~$800M led by Alibaba; Ramp Media was acquired by Cxense; APX-Labs raised $13M led by NEA with GE Ventures, Salesforce Ventures, and LDV Capital; and Mapillary raised an $8M Series A led by Atomico with Sequoia, Playfair, Wellington, and LDV Capital.

LDV Vision Summit 2014: At least 9 companies who presented either in a competition or as a speaker have since raised significant venture capital funding, and at least one was acquired. Check out the following examples: Magic Leap raised $542M, led by Google Ventures; Taboola raised $117M, led by Fidelity; Narrative raised $8M Series B, led by Khosla Ventures along with True Ventures, Passion Capital, LDV Capital; Placemeter raised $6M Series A, led by NEA; Neon Labs raised $4.1M Series A, led by Mohr Davidow; Mapillary raised $1.5M Seed, led by Sequoia along with Wellington, Playfair, LDV Capital, and angels; Seen.co raised $1.25M Seed From Horizons & KEC; Clarifai raised funding from Google Ventures, Qualcomm, Nvidia, LDV Capital, and angels; and Aviary sold to Adobe. I am sure that they are all looking to recruit great people, and they are expected to be in the audience at our next summit.

I often speak at conferences and am always amazed when people in the audience do not ask questions, even though they all want the speakers, judges, and recruiting companies to know who they are. Silence gets nobody noticed. When there is silence during the question period after a presentation, I typically say, “The most intriguing and brilliant questions will inspire others to come find you and talk afterwards, but if you say nothing, then it’s impossible for the rest of us to know why you are brilliant.” After I say this, many more people start asking questions, and the serendipity gets more interesting for all involved.

Of course, not everybody will build a success story the first time they enter a competition, but you should think of each competition entered as one more step closer to your goals. Here are some exciting competition success stories.

Courtney wrote an article in September 2012 which said, “Sunrise co-founder Pierre Valade first hit the tech stratosphere with Agora, a service he built during a Foursquare hackathon in February 2011 that’s designed to connect you with like minded folk when you check in at a specific location.” Then he joined Foursquare and focused on Sunrise with his team, and about 3 years later, Microsoft acquired Sunrise for at least $100M.

Obviously, one cannot connect the dots to the success of Sunrise purely from competing at a hackathon, but that is how Pierre started working at Foursquare -- and the rest is history. Overnight success stories do not actually occur overnight. Typically, it takes many roller coaster milestones that are connected into a line leading to success.

Another great hackathon success story is GroupMe. They launched at a Disrupt NY Hackathon and 370 days later sold to Skype for $85 million.

Ramen won the Launch Hackathon in San Francisco. “That victory exposed Ramen to potential investors and customers and created a helpful amount of buzz,” said co-founder & CTO Angilly.

One of our LDV Capital portfolio companies, Clarifai, took the top 5 spots in the image classification task at the ImageNet Large Scale Visual Recognition competition. They have since raised funding from Google Ventures, Qualcomm Ventures, Nvidia, LDV Capital, and several top-tier angels.

Geoffrey Hinton and his team at the University of Toronto won the 2012 ImageNet competition, and about a year later, they were hired by Google. This ImageNet milestone is widely regarded as the moment that deep learning broke through from the machine learning community into mainstream computer vision.

Carousell founders conceptualized their business at the 2012 Startup Weekend in Singapore and later raised $6M in funding led by Sequoia.

Zaarly was first built at a Startup Weekend competition in Los Angeles in 2011. They have since raised more than $15.1 million in funding. Of course, funding is not the goal but a great means to help you reach your goals.

There are many benefits of joining competitions, including meeting potential co-founders, meeting investors, and getting noticed by companies recruiting. A good resume is a great foundation, and there is exponential value in activities that help you get noticed, such as entering competitions, coordinating a challenge, and the serendipity of being at the right place at the right time.  

Genevieve Patterson is a computer vision PhD candidate at Dartmouth, and she helped organize our LDV Vision Summit challenges last year. She said, “I found a great job at Clarifai from meeting the founder at the last LDV Vision Summit -- I am very happy!”  

Andy Parsons, CTO of Kontor and LDV Vision Summit judge, wrote, “I’m excited to see what the teams come up with [at the next summit]. And [I am] giddy thinking about the ‘Visually Immersive’ applications that are just around the corner. I expect to see lots of these at the summit, but more importantly, I’m jazzed to meet the people behind the code, creativity, and business ideas that are bringing them into our homes and pockets.”

Ankit Sharma is a computer vision graduate from the University of Florida. He said, “I am very happy I competed in the LDV Vision Summit Computer Vision Challenges 2014. I am actually collaborating with one contact from the summit on a shoe recognition app. I am speaking with another person from the summit to collaborate on new business leveraging image processing!”

Serge Belongie, professor at Cornell Tech and advisor to startups, recently wrote in Gearing up for the next LDV Vision Summit, “Recent years have seen a surge of startup companies using Computer Vision (Clarifai, Mapillary, Magic Leap, DeepMind, Jetpac) and a burst of interest on the part of established companies (Facebook, Google, Dropbox, Yahoo!) to recruit Computer Vision talent. The time is ripe to create opportunities to bring together the key players – the brilliant students and recent grads, investors, visionaries and practitioners – that fuel this emerging startup scene.”

Jan Erik Solem sold his last company Polar Rose to Apple. He founded his new company Mapillary, which recently announced $1.5M in funding led by Sequoia, and he presented at our last summit. He also wrote about his excitement by saying, “The result of mixing these people in one single event was amazing and resulted in investments, job offers, and lots of new connections.”

Our next LDV Vision Summit on May 24 & 25 in NYC will include over 80 international speakers with the purpose of exploring, understanding, and shaping the future of imaging and video in human communication. We will showcase the best startups and computer vision experts who compete in one of the following two competitions.

There is a traditional startup competition for any startup working with photos and videos that has raised less than $1.5M in funding.

There are also the exciting Entrepreneurial Computer Vision Challenges. This is the first competition ever to combine computer vision research challenges along with aspects of a traditional hackathon with APIs and SDKs. We are calling it the #LDVvisionHack, and competitors have two months to create solutions that impress their colleagues, companies that are recruiting, investors, and the judges. We hope that all levels of computer vision experts and enthusiasts will decide to compete. All finalists will receive remote and in-person coaching by Evan Nisselson, Serge Belongie, Jan Erik Solem, Rebecca Paoletti, Andy Parsons, and others.

LDV Vision Summit Judges 2016:

Josh Elman, Greylock

Jessi Hempel, Wired, Senior Writer

David Galvin, IBM Ventures, Watson Ecosystem

Evan Nisselson, LDV Capital, Partner

Jack Levin, CEO Ambrella

Jason Rosenthal, Lytro, CEO

Barin Nahvi Rovzar, Hearst, Exec. Dir., R&D & Strategy

Steve Schlafman, Principal, RRE Ventures

Alex Iskold, Managing Director, Techstars

Taylor Davidson, Unstructured Ventures

Justin Mitchell, Founding Partner, A# Capital

Serge Belongie, Professor, Cornell Tech, Computer Vision

Howard Morgan, First Round, Partner & Co-Founder

Gaile Gordon, Enlighted, Sr. Director, Technology

Devi Parikh, Virginia Tech, Assist. Professor, Computer Vision & AI

Jan Erik Solem, Mapillary, CEO

Larry Zitnick, Facebook, AI Research, Research Lead

Tamara Berg, UNC, Chapel Hill, Assist. Professor, Computer Vision

Ramesh Jain, Professor, U. California, Irvine, Co-Founder Krumbs

Nikhil Rasiwasia, Principal Scientist, Snapdeal

LDV Vision Summit Judges 2015:
Peter Welinder, Dropbox, Engineer. Sold Anchovi Labs > Dropbox

Julian Green, Google, Group PM Mobile Vision. Sold Jetpac > Google

David Beisel, NextView Partners, Co-Founder & Partner

Andrew Cleland, Comcast Ventures, Managing Director

Kristen Grauman, U. of Texas at Austin, Associate Professor

Fran Hauser, Rothenberg Ventures, Partner

Pete Warden, Google, Engineer, Sold Jetpac > Google

Andrea Frome, Google, Brain Group. Computer Vision, Machine Learning

Serge Belongie, Cornell NYC Tech, Professor, Computer Vision

Moshe Bercovich, Shutterfly, GM Israel, Sold Photoccino > Shutterfly

Alejandro Jaimes, Yahoo, Dir. Research/Video Product

Raquel Urtasun, University of Toronto, Assistant Professor

Jan Erik Solem, Mapillary, Co-Founder & CEO, Sold Polar Rose > Apple

We look forward to seeing you compete in the competitions and join us at the next LDV Vision Summit! The deadline to enter the competitions is April 11, 2016.

 

No More Wheel Of Fortune Spinning My Camera Roll! Forevery Automatically Keywords My Photos

Every day, exponentially more image and video content is created, shared, and stored by thousands of major media companies, hundreds of thousands of professional photographers, and billions of people creating content with their smartphones. 

The problem of finding the right image at the right time to sell, publish, or share is growing exponentially. 

LDV Capital and I are proud investors in the Clarifai team, and they are solving a major problem for creators: keywording, searching, and organizing visual content. Clarifai’s deep-learning-based recognition platform already solves problems for major businesses, and they have launched a solution for the photos in your smartphone camera roll. Forevery is a mobile app that can magically and automatically tag every photo in your smartphone camera roll with over 11,000 relevant concepts, like things, ideas, feelings, people, and places.

Professional photographer and director Doug Menuez says “As a photographer I am focused on making great images that communicate to others what I see. But I spend too much time searching for the right image on my phone when I need it to send to clients, show an editor or publish it in my feeds. A major challenge is manually keywording these images so they are easy to find. I would love a solution to expedite this part of my life. Clarifai’s new app Forevery is off to a great start at solving this and their software continues to learn automatically."

I have been a photographer since I was 13 years old with my Nikon FM, F and Rolleiflex, and then I replaced my analog cameras with my first cameraphone in 2013. Since then, I have often been challenged to find images quickly, and this is exponentially harder every day with over 10,000 images in my phone’s camera roll. It seems that most all my stories over a meal, coffee or drinks with people end up surfacing at least one moment where I say, “Wait… let me show you a photo that will visualize my story,” or “Let me find that photo of when we were skiing together in Europe.”

The other day my sister asked me if I had any recent photos of her and I said, “Sure, let me find them.” Instead of opening my phone camera roll and flipping through thousands of photos with my finger, I opened the Forevery app, which is synced with my iPhone camera roll. I searched for her name and the keyword “smile” because I am sure she would prefer photos where she is smiling, but I had never tagged any photos with the word “smile” -- Forevery just knows and keywords automatically. 30 photos appeared magically after I clicked search. I quickly selected 10 images to share with my sister via the Forevery app. This took less than a minute but historically would have taken hours. (I had tagged her face in images earlier so the app already knew which photos she was in.)

Clarifai understands people, places, things, and times in your photos as well as finds emotions like “love” or “happiness” and concepts like “adventure” or “celebration.”  

Ron Haviv, Co-Founder and Photographer of VII Photo, says “Forevery is an efficient, elegant and fast way to interact with my work on the phone.”

Our lives are more than 90% based on visual conclusions. "Finding the right image in my iPhone camera roll typically takes a long time, and I am always looking for a more efficient method to find photos for sharing and publishing. I am impressed with how Forevery is auto-keywording my images, and it will be even more valuable as the technology becomes even more accurate as it continues to learn," says professional photographer Robert Wright.

People ask each other all the time: How was your trip? How was your party? What did you do last weekend? I think many of us, especially me, respond by saying, "Want to see a picture?"

Before the Forevery app, I would open my camera roll application and rapidly scroll through tiny thumbnail photos with my finger, and it felt like I was spinning the Wheel of Fortune. I hope the wheel lands on the jackpot, but typically it is back and forth: spin forward, spin, spin, spin backward, spin, spin, spin, spinning through thousands of photos.

Then I find an image that I believe is from roughly the time period when the photo I wish to find was taken, squint at the thumbnails, open one and say, "Nope, that is not it," then another, and minutes later finally, hopefully, find the image to share before my friend gets bored of sitting there. Meanwhile I keep talking to make sure the silence is not uncomfortable while I continue scrolling, until I finally find the image to show.

Now I can add descriptive keywords to my search that I think describe the image, and the Forevery app delivers a smaller set of images depending on how well I describe it. For example, I can search "Zermatt" and "Snow," which I never labeled in my images, and Forevery shows only photos with Zermatt and snow.

National Geographic Photographer and Co-Founder of VII Photo, John Stanmeyer, says "Forevery is powerful. Very powerful for organizing and prioritizing images, especially when there are thousands of photographs stored on the camera that ironically makes phone calls -- and I have thousands of images on my iPhone."

Now I open Forevery almost daily because almost all my days include visual storytelling. Finally, time wasted spinning through tiny thumbnails on my phone is a thing of the past.

Empowering Our Visual Technology Community: Power to the People!

One of the most difficult parts of our days involves filtering relevant content from irrelevant content. Every day we are inundated with articles, product announcements and news. Everyone has different ideas about what information is important to them and this changes over time.

People filter information from many sources—print and online publications like The New York Times, Le Monde, The Wall Street Journal, Corriere della Sera, Wired and The Daily Telegraph; online-only sources like Quartz, Huffington Post, The Verge and Re/code; and social communities such as Twitter, StackOverflow and Facebook.

Sometimes we want to know the top stories relevant to our business, hobbies or passions. Other times we want to discover intriguing new stories. Filtering relevant content daily is a gigantic challenge, one that gets harder every second of every day.

This inspired us to create a solution and collaborate with Serge Belongie, Computer Vision Professor at Cornell Tech; Andrew Rabinovich, Senior Engineer at Magic Leap; Rebecca Paoletti, CEO at CakeWorks; Pete Warden, Staff Research Engineer at Google; Matthew Zeiler, CEO at Clarifai; and Jan Erik Solem, CEO at Mapillary. We would love to invite more of you to collaborate.

We built LDV Vision News to empower our community by highlighting and tracking the smartest people doing the most interesting projects across our global Visual Technology ecosystem.  This is a very basic first version. Moving forward, we are excited about the opportunity of collaborating with you and others to build a vibrant and valuable tool to help each other succeed.

Visual Technology is any technology that creates, analyzes and manages visual data for consumers or businesses. It spans technology empowering photography, videography, medical imaging, analytics, robotics, satellite imaging, augmented reality, virtual reality, autonomous cars, media and entertainment, gesture recognition, search, advertising, cameras, e-commerce, sentiment analysis, and much more. Each of these sectors relies more and more on different forms of machine intelligence, including computer vision, deep learning and artificial intelligence.

Every day I speak with many people across our ecosystem who unanimously agree that no online tool exists to help us filter the most interesting Visual Technology projects and people. We have different conferences, including the annual CVPR, LDV Vision Summit, Video Focused Conferences and Deep Learning Summits, that highlight people and projects, but they only occur once a year. All agree it would be great to have a real-time online resource to publish, share and discuss what is most interesting every day.

Serge Belongie, Professor, Computer Vision, Cornell Tech: “Look forward to having LDV Vision News become a valuable resource for my students, alumni and community to have space to discuss the news in our community. Hope you all will collaborate with us.”

Andrew Rabinovich, Principal Engineer, Magic Leap: “This is a great idea. A central place for vision tech, evaluated by experts/novices and ranked by importance. This will be very useful, and I look forward to collaborating with our community.”

Donna Romer, CEO of Yarn, emailed immediately after she joined: “This is a great resource. There just is nothing like this—I keep a personal list that is really unproductive after a while because there is no community. I have signed up to collaborate and am excited to submit interesting stories I see.”

This initial version of LDV Vision News is extremely basic. It allows anyone to join, submit interesting stories and projects, comment, upvote and share to their social graph. Next we will evolve the platform to also filter signals around the people working in our ecosystem.

Simon Osindero, AI Expert: “The LDV Vision News site looks great—nice initiative. I just signed up and look forward to contributing.”

We believe our two goals of filtering the most interesting projects and people will become extremely valuable for recruiting, finding interesting new startups, creating a forum to highlight unknown initiatives, finding companies to invite to our LDV Vision Summit and, hopefully, providing additional benefits that we have not even realized yet.

Every week members receive an email newsletter with the 10 most interesting stories posted to LDV Vision News.

“Excited to contribute and collaborate with our community in building a valuable resource,” says Pete Warden, Staff Research Engineer, Google.

Rebecca Paoletti, CEO of CakeWorks: “We are always working on and looking for new ways to help our customers be informed and up-to-date. Our weekly Worth Reading newsletter is one tool, and we are excited to collaborate with the LDV Vision News community.”

Jan Erik Solem, CEO of Mapillary: "I’m contributing imaging links to LDV Vision News regularly and it is becoming a great resource for news and inspiration. I love the weekly digest email, it is a great curated summary of what’s going on."

I already found an interesting company that I didn’t know about via LDV Vision News and have asked them if they would speak at our next LDV Vision Summit. 

Prior to LDV Vision News my morning routine was to read on my iPad while having coffee. When I found interesting stories and projects I wanted to review later or share with others in our ecosystem, I would—depending on the urgency—email them to myself and/or share them on Twitter, Facebook and LinkedIn.  

Ophir Tanz, CEO of GumGum: “LDV Vision News looks terrific. I've joined, shared with our team and we’ve started submitting stories.”

Carter Maslan, CEO of Camio: “Nothing better than curated vision stories from the peers you trust. Excited to contribute to this.”

After launching LDV Vision News, my daily routine has become more efficient. Throughout the day I find interesting projects, submit them to LDV Vision News and then share to my social graph from LDV Vision News. Now I have all my visual technology focused information in one place and learn from others in our community.

Matthew Zeiler, CEO, Clarifai: “LDV Vision News looks great. It is a valuable constant stream of relevant information for all of us in the visual technology ecosystem, from research to product. Sharing, reading and discussing this info in one place will save us time and help us keep up to date. I am excited to be involved from launch and to help it grow!"

"This online community is, in fact, a place that brings together *multiple* communities -- a combination of business, tech industry and academia -- with a potential to identify and highlight the most impactful people & projects in the visual tech ecosystem. It will become an important resource for anyone working in this area." Mor Naaman, Associate Professor, Jacobs Institute, Cornell Tech

We hope you will see value in LDV Vision News, and we look forward to having you contribute designs, features and code to help us make this an extremely valuable resource for our community.  

Hope you join our LDV Vision News community and look forward to seeing the people, stories and news you believe are important.

Power to the People!

 

Visual Technologies Are Revolutionizing Humanity And Business

Evan Nisselson, LDV Capital ©Robert Wright/LDV Vision Summit


Join us at the next LDV Vision Summit May 24 & 25, 2016 - Discount tickets available!
This article is an excerpt from our LDV Vision Book 2015  

Visual Technologies are revolutionizing humanity and business around the world like never before. People spend an unbelievable majority of their time every single day with technologies that help them make sense of what’s going on around them, capture the moments that matter to them, and figure out how to get from point A to point B. What’s unique now in the world of technology and the Internet is the evolution of higher bandwidth, Cloud computing, real-time data, and mobile devices that allow people to capture exponentially more visual content. Whether it be images or video—this is drastically changing how people communicate with each other. Instead of going to the dentist and texting someone that you just went to the dentist, you send a picture of yourself at the dentist. While on vacation, you don’t just write an email telling your friends about your vacation. You post an image of the beach on Facebook or post what you ate that night on Instagram. You capture video of a beautiful event, like a wedding or a birthday, to preserve the moment so you can relive it later or share it with others.

Now in its second year, the annual LDV Vision Summit is all about gathering the key people in our ecosystem at one event to hear their wisdom and advice on how humanity and business will be impacted by new technologies and companies in the visual imaging world. We gather the five different categories of people—from startups, investors, computer vision and AI experts, media and brand execs, and creators—to come together to network, do deals, explore and inspire, and find co-founders to build businesses. It’s all about empowering the people building the visual revolution.

The ideas and lessons from the LDV Vision Summit live on long after the event concludes via different mediums as we know everyone has different content viewing tastes. We capture videos from the event which are available online and we are also very happy to be putting together this second LDV Vision book.

What is visual technology? Visual technology is any technology that captures, organizes, filters, learns from, or distributes visual content either for consumers or businesses. It’s a horizontal focus across all business sectors and humanity. How will these technologies impact everybody’s lives, in work and in play? If you look across different industries, these examples cover a wide range of sectors that visual technology impacts, from personal to business, and sub-sectors of medical, financial, sentiment analysis in advertising, in publishing, in shopping, autonomous cars, and so on. There are so many examples of visual technology affecting our world today, you may not even realize it. Facebook couldn’t exist without images and video. There’s been a lot of discussion recently about self-driving cars. Self-driving cars would not exist without many cameras in and around the car telling us where the pedestrians are, where other cars are, where the curb is, and how fast the car should be going. That’s an unbelievable evolution in leveraging visual content, not only to communicate visually but to help us be safer in day-to-day activities. The mission of the LDV Vision Summit is to bring together—in a unique way—the men and women who work in this world and who together represent the whole ecosystem of visual imaging, but who don’t typically sit in the same room.

One of the things that’s extremely rewarding and unique about our Summit is that there’s no other event that brings together those five categories of people to one place. It’s challenging because they all have different needs, but what’s unique is the serendipity of everybody coming to one place. Over the last two events we’ve had, there have been at least 10 to 15 companies that have raised significant funding after being highlighted at our Summit. There are several people that have been recruited by technology companies after meeting at the Summit. There are several technology service deals with media companies that have been inked because of the Summit. Some people have joined together to build new companies and also a multitude of investment opportunities. We’ve invested in a couple from the Summit, and we look forward to investing in more. That core of serendipity of opportunity is the foundation for why we put this together, in addition to inspirational and educational benefits.

I’ve been building businesses and visual technologies since the mid-’90s, and prior to that I was a professional photographer and a photo agent. What’s fascinating is that in the ’90s, because of the bandwidth and other bottlenecks, it was harder. There were no phones with high-quality cameras. No one was walking around making photos with camera phones or iPads. It was a very slow evolution. It continued to be slow until the mid-2000s, and I think in the last five years the pace started to move faster, mainly because of the high growth of adoption of smartphones with cameras and smaller devices using hands-free cameras. Things have exploded. Take GoPro, for example.

Everybody who is doing extreme sports is using a GoPro. However, with this exponential growth of content, it’s actually getting harder and harder to decipher and discover the signal from the noise. I’d say 99.9% of the content that’s created and shared is not high-signal or high-quality content. One significant challenge is to figure out the single video or single video frame that’s contextually relevant to me, at the contextually right time, so that I can enjoy watching it on any device. In the last five years, the industry has grown exponentially following the evolution and growth of mobile devices. I think we’re just at the beginning of that growth. We cannot even imagine how many more ways there will be for us to capture, create, share, and visually communicate with content.

We talk about exponential growth of capturing and of creating, but the real core that is going to make this exponential growth of visual content valuable is machine intelligence, computer vision, deep learning, and artificial intelligence, which can now contextually filter what’s relevant for us at the right time. As we create more and more visual content, we’re becoming overwhelmed with everything available to us. How can we make sense of it all? Because of Cloud computing and advancements in machine intelligence, we are seeing tremendous progress and opportunity. Facebook, Twitter, Google, Apple, Dropbox, Qualcomm, Intel, Microsoft, Shutterfly, and other major players are building up huge departments for machine learning by acquiring companies and teams in that space. They realize the need for filtering the signal from the noise.

All technology impacts lives for good and for bad. It’s all a matter of perspective. Twenty years ago I was working at an early-stage startup called @Home Network, working around the clock in front of my computer buying all these devices, and I saw very early how it was going to impact people’s lives. When a couple of us created the first broadband photo community in 1997 at @Home Network, called Making Pictures, as a $3 million joint venture with Intel, everybody thought I was crazy. Everybody said, “I don’t want to make digital photos. That’s a horrible thing. I want negatives, and I want to go to the darkroom.” The question of whether a technology is good or bad evolves with our evolution of technology and humanity. I’d rather spend more time on creativity and helping entrepreneurs build successful businesses than doing a lot of the menial stuff that doesn’t intrigue me, entertain me, or is in my mind a waste of time. One person’s favorite service is the bane of another person’s existence.

A big theme of this year’s Summit was satellite imaging, which is something that we’ll talk about a lot here. More and more satellites are being put in the air and more and more have high-definition cameras. Not only will this impact businesses and lead to the creation of new billion-dollar businesses, but satellite imaging is going to be able to move financial markets as well. Now we can actually track how many cars are in a parking lot at Walmart or Home Depot and compare the two in real time rather than months later after manual research. We can identify the abundance or lack of oil reserves from the sky. We can monitor traffic and track trends for busy roadways. We can track how popular or how active shipping ports are. We can count how many containers are on a ship, how many cars are in the port, and how fast they’re moving in and out of the shipping ports.

To take it to the next level: Sure, we have camera phones and we have flying cameras attached to drones—but when will we have satellite selfies? Let’s back up a second. You can take a selfie with your camera phone. You can take a selfie with a selfie-stick. You can take a drone selfie, which is called a “delfie.” There’s a “jelfie,” which is a jumping selfie. Soon there’s going to be a satellite selfie, where you’ll be able to touch your smartphone, look up into the sky and a satellite will make a picture for you. Obviously there are technical hurdles that we have to get through, but it will happen in our lifetime. You’ll be able to tap your smartphone, grab your friend or significant other, look up, and the satellite will take a picture of you. Maybe soon you won’t even have to carry your camera phone around or take any pictures because you can say, “I want this picture” and in two seconds it will be captured by a security camera, a drone, or a satellite camera.

Obviously, there are bandwidth issues from the satellite. Obviously, there are delays from the satellite, and the quality is not good enough to see the freckles on your face—yet. However, the military is actually using satellite cameras that can read a paper in your hand. This technology is happening, but there are still a lot of questions and I don’t have all the answers. That’s why we gather the experts at the annual Vision Summit and in this book, so we can help inspire others about how visual technologies are impacting our lives for better or for worse.

Machine intelligence has great impact within the medical world, too. Why is it that we go to the doctor for an x-ray and the radiologist only looks at three x-rays from that one visit? Imagine if they could look at x-rays from our whole lives and see trends. Even better, why don’t they look at millions of anonymous x-rays to better understand whether what we have is something serious? What about skin cancer? Several companies are starting to create cameras that can photograph moles or different patches of your skin so that you can send the images to a doctor in advance of an appointment. What about mammography? We can look at hundreds of mammogram photos to understand trends in these medical scans, helping radiologists be exponentially more efficient at spotting potentially problematic patterns. That can’t be done unless we’re leveraging computer vision and machine intelligence.

This potential for analysis of visual content—contextualizing, organizing, using it to predict and identify trends—is a maturation of visual technology that allows us to transition these tools from industrial applications to mass consumer use. This year, we saw tools for advertisers to analyze consumer content via facial expressions through sentiment analysis thanks to companies like Emotient and Affectiva. We learned about face-recognition doorbells, exciting developments in 3D printing technology—like a custom-fitted insole—and Microsoft’s hologram headsets. There’s the interactive video experience from WIREWAX and Sphericam’s 360-degree video camera that’s taking adventure photography to the next level.

The purpose of this book is to capture the most important ideas from the Summit so they are not lost once the event is over, and to explore the latest thinking in everything from computer vision to AI to deep learning to augmented reality. During the 2015 LDV Vision Summit, we assembled 80 speakers from many countries around the world to take part in this two-day meeting of the minds in New York City. This written document of the sessions, presentations, and conversations that took place there is our attempt to preserve the knowledge and wisdom from those we brought together so that others can learn, innovate, and create. The book is broken up into themed sections, and what’s really fascinating and unique about the group of people assembled here is the wide range of visual technologies that are represented across such a broad spectrum. I think that’s what is most exciting and most challenging, and it will deliver exponential returns for everyone involved and everyone who reads this book—hopefully in happiness, health, and financial benefit.

I have always been fascinated by and passionate about visual computing and visual communications, ever since getting my first Nikon FM camera at 13 years old. I’ve been a professional photographer, spent 18 years building businesses within the visual technology sector, and have been an investor for more than three years at LDV Capital. What excites me even more is that I don’t have all the answers. I never have all the answers. I love surrounding myself with others who might have the answers and collaborating with them to figure things out, to be inspired, to learn.

That’s the core of why several of us have worked together to bring the LDV Vision Summit to life, so that it can empower the world to learn, inspire, and create. Hopefully each and every one of you will have your own benefits, whether or not you’re an expert, whether or not you love photography or video, whether or not you’re thinking about being an entrepreneur or you’re a computer vision expert and you want to figure out how to build a business rather than studying how to write grants for the rest of your life. There’s a piece of the puzzle for everybody here, and I hope you enjoy the wisdom of all of our speakers in this book. 

Discount tickets for LDV Vision Summit May 24 & 25, 2016 available!
This article is an excerpt from our LDV Vision Book 2015