Sea Machines Analyzes Diverse Visual Data To Deliver Safe Autonomous Boats

As we gear up for our 6th annual LDV Vision Summit, we’re highlighting some speakers with in-depth interviews. Check out the full roster of our speakers here. Regular-price tickets are still available for our LDV Vision Summit on May 22 & 23, 2019 in NYC at the SVA Theatre. 60 speakers in 40 sessions discuss how visual technologies are empowering and disrupting businesses. Register for tickets now!

Fiona Hua, Lead Perception Research Scientist at Sea Machines Robotics (courtesy Fiona Hua)

The fast clip of innovation these days often translates to massive changes. As industries mature, even with leading-edge technologies, changes come not in the form of exciting research but in tiny shifts. That was Fiona Hua’s experience as a research scientist before joining Sea Machines, a marine- and maritime-focused technology company that recently closed a $10 million Series A. (LDV Capital invested in their seed and Series A alongside other top-tier funds including Accomplice, Eniac, Launch Capital, Geekdom, Toyota AI and others.)

“I have worked in biometrics for more than 10 years, developing algorithms for facial, iris and fingerprint recognition, with products for millions of users all over the world, from bank login systems and law enforcement to border control applications. I have seen how new technologies helped boost system performance from good to superb, which makes the products much more reliable for daily use. But this area has been well researched and the accuracy can be 99.9999%. People in this area now are really working on improving that small tail of accuracy.” Interesting, but not interesting enough — Fiona knew she was ready for a new challenge.

Enter the prospect of autonomous vessels. “I am really interested in the autonomous industry, because it's happening, and it's changing the world. Comparing biometrics to autonomous perception work, the data is different, the operational situation is different, the problem is different, but the way to solve problems is similar and the fundamental technology is almost the same.”

In this article, Fiona talks about leading the perception team at Boston-based Sea Machines, one of our portfolio companies. Read on to learn about her journey from biometrics to autonomous vessels, how Sea Machines differentiates itself in the market, why she’s excited for our LDV Vision Summit later this month, and more.

FROM SELF-DRIVING CARS TO SELF-DRIVING VESSELS

At first glance, biometrics might seem to have little in common with the operation of boats. That’s what Fiona thought, until she got the phone call about Sea Machines.

After initial conversations, Fiona realized the as-yet-unfamiliar field involved similar technologies applied to unfamiliar data: how to collect and organize the data, how to deal with incomplete data, and how to optimize your system to extract the maximum information from it.

“I started to think, wow this is a huge domain and a very important industry. Autonomous shipping is the future of the maritime industry. Similar to the autonomous car industry, autonomous vessels could affect everyone’s daily life. That's why I got into Sea Machines,” says Fiona. She immediately recognized that she could leverage her technical skills and experiences in a new way.

HOW SEA MACHINES’ PERCEPTION TEAM IS WORKING TO MAKE BOATS THINK

Sea Machines Robotics specializes in advanced control technology for workboats and other commercial surface vessels. (courtesy Sea Machines)

As the perception team lead at Sea Machines, Fiona is responsible for guiding the perception research and collaborating with other teams with the same goal in mind: “To make the vessels see more, sense more, and think more. We want to help the vessel understand the situation around it, like the locations of other vessels and markers, the sea state, the weather, etc. As long as the vessel knows what surrounds it, we can teach the vessel how to react to the environment appropriately.”

“We want to help the vessel understand the situation around it, like the locations of other vessels and markers, the sea state, the weather, etc. As long as the vessel knows what surrounds it, we can teach the vessel how to react to the environment appropriately.”

To help vessels sense their surrounding environment, the perception team uses multiple onboard sensors to gather as much information as possible: radar, LiDAR, AIS, GPS, visible-light cameras and thermal cameras. From there, the team employs state-of-the-art technologies including deep learning, computer vision, image processing, and sensor fusion to process the different sensor streams and translate them into information a human can easily understand.
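Fiona doesn’t spell out Sea Machines’ internal pipeline, but a common pattern for this kind of late sensor fusion is to project every sensor’s contacts into a shared vessel-centric frame and then associate them, so that radar contributes precise range while the camera’s deep-learning classifier contributes identity. Here is a minimal sketch in Python; the class names, fields and gating threshold are illustrative assumptions, not Sea Machines’ actual code.

```python
# Minimal late-fusion sketch: convert each contact to a common XY frame,
# then attach camera-derived labels to nearby radar returns.
# All names and thresholds are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class Detection:
    source: str             # "radar", "lidar", "camera", "ais", ...
    bearing_deg: float      # bearing relative to own bow, in degrees
    range_m: float          # distance to the contact, in meters
    label: str = "unknown"  # e.g. "buoy" or "cargo_ship" from a vision model

def to_xy(d: Detection) -> tuple[float, float]:
    """Convert polar (bearing, range) to local Cartesian coordinates."""
    theta = math.radians(d.bearing_deg)
    return (d.range_m * math.sin(theta), d.range_m * math.cos(theta))

def fuse(radar: list[Detection], camera: list[Detection],
         gate_m: float = 25.0) -> list[Detection]:
    """Copy each camera label onto the nearest radar contact within gate_m."""
    for r in radar:
        r_xy = to_xy(r)
        nearest = min(camera, key=lambda c: math.dist(r_xy, to_xy(c)),
                      default=None)
        if nearest is not None and math.dist(r_xy, to_xy(nearest)) < gate_m:
            r.label = nearest.label  # radar supplies range, camera identity
    return radar

# Example: one radar blip and one camera detection of a buoy nearby.
blips = [Detection("radar", bearing_deg=10.0, range_m=300.0)]
sights = [Detection("camera", bearing_deg=11.0, range_m=290.0, label="buoy")]
print(fuse(blips, sights)[0].label)  # -> "buoy"
```

A production system would add time synchronization, coordinate transforms from the GPS/IMU state, and a proper tracker per contact, but this association step is the heart of turning raw returns into information a human can easily understand.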

It’s clear that the self-driving car industry is pioneering the algorithms and frameworks that enable autonomous technologies, says Fiona. But while the maritime use case is not quite at the maturity of the automotive industry, vessels have an advantage when it comes to sensors.

“It’s debatable whether a car should have additional sensors other than cameras, due to the cost,” says Fiona. By contrast, cost is less of a concern for maritime use because most engine-powered ships have already installed expensive sensors like radar, AIS and GPS systems.

“Adding cameras, like wide-dynamic-range HD cameras and thermal cameras, is very welcome for the captain or driver, since they may ‘see’ more from the cameras, which is especially a huge convenience for big container ships. Adding LiDAR is not a big cost when safety is the main concern for commercial vessels. All this sensor data is a great resource that ultimately could make vessels safer, more efficient, and less expensive.”


This doesn’t mean it’s easier to develop technology for autonomous vessels, Fiona explains. “Sea surface vessels have much more complexity compared to cars. These vessels have different shapes, types and sizes, and they can be seen at different angles, speeds, and distances. Weather conditions and sea states can reach extremes that are hard to predict. The behaviors and controls for collision avoidance are quite different for a thousand-foot cargo ship than for a small buoy. All these variations add complications to our vision tasks. Looking at a busy harbor with many buoys, small crossing boats, and several thousand-foot container ships against a background of construction, it is easy to understand the difficulties of perception work.”

LEADING THE PERCEPTION TEAM AT SEA MACHINES

So what’s a typical day for Fiona and the perception team? As might be expected, it’s a combination of a lot of research and time in the field.

“Thirty percent of the time we are talking about new ideas, new solutions, because I want to help my team find the right direction and optimize our work. Another forty percent of the time I'm actually doing the real work: the research, looking at papers, coding, understanding the data.”

That doesn’t include the time spent collaborating with other colleagues around the world, or the data management itself.

“We are working closely with our Hamburg (Germany) team to integrate our perception system on Maersk’s thousand-foot container ship; we are sending data to an annotation company to label the ships in images; we go out by boat with the testing team to understand the real operational situation and what the harbor looks like.”

There’s also the experience of being out on the vessels themselves that’s unique to this role, says Fiona. “I like to find some time to go out with our lead testing captain. I want to know how she uses our autonomy control system, how she tests it, and what the problems are. And I also like to hear about her experiences and her perspective on the future of the maritime industry.”

Being on the boat gives Fiona a chance to experience the traffic a vessel encounters — of many different types and sizes.

“Our testing team drives boats out in the harbor every day. All the way from the dock to the testing area, there are different vessels driving by: canoes, lobster boats, cruise ships, even cargo ships. When you are really out there, you learn how busy the harbor can be. It’s much busier than you expect. If you have a chance to go even further offshore, it’s less busy than you expect.”

The perception team’s product has not yet been integrated into the self-driving boat, but it will be tested on board soon.

BEYOND TECH: SEA MACHINES STANDS OUT WITH DOMAIN EXPERTISE

There are several other companies doing similar things with autonomous maritime vessels but, according to investors, says Fiona, “Sea Machines has the most capability to bring this mission to a real product.”

What makes Sea Machines stand out is not only its technology and its big vision for the future, says Fiona. It’s also the leadership team’s deep understanding of the maritime industry. Sea Machines’ CEO Michael Johnson, COO Jim Daly and its Boston-based engineers and testing captain all have deep experience in the maritime industry. “They know the industry deeply, where the need for change comes from, and what can be changed. Everything that we are talking about, working on and aiming for is a key part of the future of the maritime industry.”

This specific expertise doesn’t limit their vision of the future. It keeps them humble enough to absorb new concepts and open to new technology. They believe that vision, autonomy and a combination of other new technologies could eventually lead the maritime industry into a new generation of smarter, safer, more efficient vessels.

Sea Machines’ vision, autonomy and a combination of other new technologies could eventually lead the maritime industry into a new generation of smarter, safer, more efficient vessels.

LDV VISION SUMMIT

Fiona is slated to deliver a talk at LDV Vision Summit on the challenges, opportunities and vision of leveraging perception to deliver maritime autonomy. Beyond the talk, she looks forward to soaking up learnings from the other attendees.

“I really want to meet people from different applications, different industries, but applying similar vision technologies. Or maybe even different technologies and learn why they came up with their idea, what problem they're working on and what their vision of the future is.”

If you’re building a unique visual tech company, we would love for you to join us. At LDV Capital, we’re focused on investing in deep technical people building visual technology businesses. Through our annual LDV Vision Summit and monthly community dinners, we bring together top technologists, researchers, startups, media/brand executives, creators and investors with the purpose of exploring how visual technologies leveraging computer vision, machine learning and artificial intelligence are revolutionizing how humans communicate and do business.

Building a Global Marketplace to Commoditize AI, Synthetic Data is Key

Yashar Behzadi, CEO of Neuromation at LDV Vision Summit 2018 ©Robert Wright

Regular tickets are available for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre. 60 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

Yashar Behzadi is the CEO of Neuromation and an experienced entrepreneur who has built transformative businesses in the AI, medical technology, and IoT spaces. At our 2018 LDV Vision Summit, Yashar spoke about how Neuromation aims to democratize artificial intelligence through the use of synthetic data and distributed computing to dramatically reduce the cost of development. Through its token-based global marketplace, Neuromation connects AI talent, data providers, and customers to enable the development of novel AI solutions. He shared how their marketplace and token function, as well as how synthetic data will impact our industry today and in the future.

It's great to be here. As mentioned, I'm Yashar Behzadi, the CEO of Neuromation. This should be a familiar image for everybody. We know that major platform companies maintain their competitive advantage by building walls around their ecosystems and data moats to protect themselves, especially with data-hungry deep learning applications. Neuromation's vision is to marketize AI, level the playing field, and enable everybody to contribute to and benefit from AI.

So how do we do that? Well, we've raised about $50 million to bring our vision to the world, and it starts with building a global marketplace. Some of the trends that drive us to think about a marketplace for AI are the commoditization of a lot of these algorithms and applications, and the freeing of data, as many people are starting to want to share their data or monetize their data assets. I think the most powerful trend here is that software engineers are becoming AI practitioners. Right? Now you're looking at a pool of people who are orders of magnitude bigger than the current AI practitioner space.

LDV Vision Summit 2018 ©Robert Wright

What happens when millions of people from across the world - in centers of excellence that we're finding in Eastern Europe, in China, in all these different places - are developing these skills and now have access to data, as well as access to global demand from the enterprise? That's the nexus we're building: this overall marketplace, to connect the dots.

Part of this will be the data side of things, and we've developed a number of tools to make it simpler to build AI applications, just out of pragmatism. It's sometimes hard to get data for these applications. I'll talk about this in a little more detail. So whether there's open source data, proprietary data, individuals' data, or synthetic data, we allow it into our platform, integrated in a simple way, and connected with the overall model development system and the AI developers to streamline development.

All of this is enacted through a token exchange, so we did an ICO as well, and built a virtual economy around this. This allows for a number of key advantages. One is very granular control of individuals' data and the ability to share and withdraw access. And global microtransactions are a very big part of this. You can have a model. You can be a grad student. You can put your AI model on our platform. One person can call it from the other side of the world and get paid on that one transaction, very seamlessly, very easily, and then you can use that to buy data assets or other things on the platform. So it creates a very simple and easy method for transactions.


It’s automated. It’s cheap. And probably the most powerful element of synthetic data is that its combinatoric power adds robustness to any application.


Synthetic data, as I mentioned, is one of the key technologies we're building to enable this. We have some other things in the lab as well. And synthetic data is a very interesting place to start because it breaks down the barriers of these data moats and allows small companies to get the data they need for specific applications to then compete and win against the larger platform companies. And it allows for 100% pixel-perfect labels. It's automated. It's cheap. And probably the most powerful element of synthetic data, which we heard about yesterday as well, is that its combinatoric power adds robustness to any application.

By taking an object and multiplying the number of objects by the number of environments, by the number of different camera attributes, by the number of landscapes and settings you may want to embed that object in, you can combinatorially create billions of images that supplement your applications and make them even more robust, or address issues of bias and other things we've talked about earlier today.
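The arithmetic behind that claim is easy to check. A back-of-envelope sketch in Python, with made-up axis sizes (apart from the 200,000-SKU figure from the retail example discussed next), not Neuromation's actual pipeline:

```python
# Back-of-envelope check of the combinatoric claim: the number of unique
# synthetic images grows as the product of the variation axes.
# Axis sizes are assumptions, except the 200,000 SKUs mentioned in the talk.
from itertools import product

objects      = [f"sku_{i}"   for i in range(200_000)]  # retail SKUs
environments = [f"scene_{i}" for i in range(50)]       # shelf/room layouts
cameras      = [f"cam_{i}"   for i in range(20)]       # lens, angle, exposure
lighting     = [f"light_{i}" for i in range(10)]       # time of day, shadows

total = len(objects) * len(environments) * len(cameras) * len(lighting)
print(f"{total:,} possible renders")  # 2,000,000,000 -- billions, as claimed

# product() enumerates render "recipes" lazily, so the full combinatorial
# space can be sampled without ever materializing it.
recipes = product(objects, environments, cameras, lighting)
print(next(recipes))  # ('sku_0', 'scene_0', 'cam_0', 'light_0')
```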

We find that there's kind of a natural trajectory for using synthetic data. The first place to use it is where the models are very simple and the object models are very simple. So in a retail application, we're working with a large retailer, they have 200,000 SKUs, they're rapidly changing, they have various different shelf arrangements ... It's kind of impractical and very costly to generate the data necessary to do the traditional deep-learning models.

But with synthetic data, we can generate all the SKUs, because it's easy to create a realistic model of a particular consumer good. It's well described. And we can build this application and have it perform on par with traditional data methods. The next evolution of synthetic data will be around simulation environments. I think the progression is toward more generalized models, in which you look at how well your model does at classifying specific objects, and then have the synthetic data pipeline automatically generate new data to make the model more robust against those particular edge cases.
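One way to read that closed loop: measure per-class performance on validation data, then commission new randomized renders only for the classes the model gets wrong. A toy sketch, with made-up accuracy numbers and a hypothetical planning helper rather than Neuromation's API:

```python
# Toy sketch of edge-case hardening: given per-class validation accuracy,
# decide how many extra randomized synthetic renders each weak class needs.
# Thresholds, batch sizes, and class names are illustrative assumptions.
def plan_synthetic_batches(per_class_accuracy: dict[str, float],
                           min_accuracy: float = 0.95,
                           batch_size: int = 10_000) -> dict[str, int]:
    """Map each underperforming class to a count of new renders to request."""
    return {cls: batch_size
            for cls, acc in per_class_accuracy.items()
            if acc < min_accuracy}

# The model handles cereal boxes fine but confuses two bottle SKUs, so only
# those get new renders (varying pose, lighting, occlusion, background).
scores = {"cereal_box": 0.99, "cola_bottle": 0.88, "water_bottle": 0.91}
print(plan_synthetic_batches(scores))
# -> {'cola_bottle': 10000, 'water_bottle': 10000}
```

Each batch of renders then folds into the next training round, and the loop repeats until the weak classes clear the accuracy bar.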

LDV Vision Summit 2018 ©Robert Wright

There's a broad set of applications, as I mentioned, across a variety of fields, and a lot of these we're developing in-house or in partnership with others. The first is enabling applications to be built faster and cheaper than with traditional methods. The second is applications in which the event you're trying to estimate is actually very rare, so it's hard and impractical to get the amount of data that you need in that category. The third is any application that can benefit from the variety added by the combinatoric power of synthetic data, which hardens it.

I'm very happy to be here, and this is a great conference. And if you guys want to talk more about contributing to our marketplace or benefiting from our marketplace, please talk to me. Thank you.

Don’t miss the opportunity to hear from more visual tech visionaries at our LDV Vision Summit 2019. Register now!

How are Brands Benefiting from AI & Computer Vision?

We are just two weeks away from our sixth annual LDV Vision Summit on May 22-23 in NYC. Our agenda is live with some phenomenal speakers and sessions to discuss cutting-edge visual tech. Regular-priced tickets are available through May 12 - get your tickets now!

At the LDV Vision Summit 2018, Dave Gershgorn (Quartz) spoke with Ophir Tanz (GumGum), Erin Rech (Initiative), Jessica Criscione (Ogilvy & Mather), and Beth Rolfs (Publicis) about how brands can benefit from AI and computer vision.

Artificial intelligence is already an integral part of the digital marketing ecosystem as it exists today. Whether you’re an advertiser buying ads programmatically or a marketer using computer vision to assess the media value of your sponsorship, artificial intelligence has become a fact of life. The panelists shared the challenges and opportunities of these technologies impacting advertising and marketing today, what’s on the horizon, and what to expect five years from now.

Our LDV Vision Summit 2019 will have many more great discussions by top experts in visual technologies. Don’t miss out, check out our growing speaker list and register now!

Visual Technologies Are an Amazing Lens to Think Through the World

Nabeel at Spark in San Francisco.

Nabeel Hyatt is a former founder, CEO, and now General Partner at Spark Capital. As a former engineer and designer, he has invested regularly at the forefront of visual technology companies. He has led rounds and served on the board of such breakouts as Cruise (sold to GM for over $1B), Capella Space, North (formerly Thalmic Labs), and Rylo.

Nabeel will be having a fireside chat with Evan on trends and investment opportunities in visual technologies at the 6th Annual LDV Vision Summit, May 22 & 23. Regular tickets are available through May 12 - get yours now to come see Nabeel and 60+ other top speakers discuss the cutting edge in visual tech.

In the lead-up to our Summit, Evan Nisselson, General Partner at LDV Capital, asked Nabeel some questions about his experience investing in visual tech and what he is looking forward to at our Vision Summit...

Evan: You have impressive entrepreneurial experience as an engineer, designer, founder and CEO before becoming a General Partner at Spark Capital. What is your most important value add to help your portfolio companies succeed and why?

Cruise Co-Founders Dan Kan and Kyle Vogt with Nabeel after taking a test drive.

Nabeel: I've come to believe venture is a craft best practiced at the personal, not industrialized, level. Every startup is a very unique journey, and every founder is at a specific place on their path. It's my job to adjust to the needs of the team instead of applying some playbook of advice & value. I happen to be a pretty right-brain/left-brain person, so in any given month I may be bouncing between product, metrics, people, culture, marketing, or fundraising.

Evan: You have invested in several businesses with visual technologies at the core. Please give a couple of examples and how they are uniquely analyzing visual data.

Nabeel with North (formerly Thalmic Labs) Co-Founders Matthew Bailey, Aaron Grant and Stephen Lake.

Nabeel: Visual is an amazing lens to think through the world right now. The advances in computer vision over the previous few years were one of the core reasons we believed it was finally the right time for autonomous driving when we led the Series A at Cruise. I have no doubt that the technology is getting massively cheaper and will make its way into many consumer hardware products in the next few years. At the same time, we are always looking for new sensors, new visual data to evaluate the world around us, such as the Synthetic Aperture Radar (SAR) data that Capella Space is going to be providing very soon. That will be the first time a U.S.-based commercial company offers that view of the world, and it can be massively enabling to the next generation of companies.

Evan: Are you more optimistic on Augmented Reality, Mixed Reality, or Virtual Reality businesses and why?

Nabeel: Why the choice? Spark has made pretty deep investments on all sides here. In Virtual Reality, we co-led the Series A at Oculus, and I think their next product, the Quest, which comes out in May, is the first consumer-ready VR product. I think it’s going to surprise some folks who had written off VR.

On the Augmented Reality side, we are investors in Niantic (actually one of our largest investments), as well as North and Control Labs. When we invested in Niantic, a lot of our belief in the future of that company was not just in their games, but in their ambitions in augmented reality. They are a powerhouse here, and we are pumped that they are finally beginning to talk about those ambitions.

Bijan and Nabeel

But I’d say Augmented Reality is still a little further off; there are still some fundamental technical breakthroughs needed for it to be an everyday, or even regularly used, product. Thankfully there are companies like North and Control Labs that are really deep tech innovators making the breakthroughs necessary to bring AR to the masses.

Evan: You frequently experience startup pitches. What is your one-sentence advice to help entrepreneurs improve their odds of success?

Nabeel:
Less time on how big your market is, more time on how your product is undeniably amazing. And don’t forget to really show your passion as a founder. 

Check out the other phenomenal speakers and the agenda. Don’t wait, get your tickets now for our sixth annual LDV Vision Summit!

Computer vision is an immensely fertile area for entrepreneurs and investors!

Matt Turck, Partner at FirstMark Capital

Matt Turck, Partner at FirstMark Capital

Matt Turck is a Partner at FirstMark Capital. He invests across a broad range of early-stage enterprise and consumer startups. Prior to FirstMark, he was a Managing Director at Bloomberg Ventures, the investment and incubation arm of Bloomberg LP, which he helped start. Previously, Matt was the co-founder of TripleHop Technologies, a venture-backed enterprise search software startup that was acquired by Oracle.

Matt will be sharing his knowledge on trends and investment opportunities in visual technologies as a panelist and a Startup Competition judge at the 6th Annual LDV Vision Summit, May 22 & 23. Regular tickets are available through May 12 - get yours now to come see Matt and 60+ other top speakers discuss the cutting edge in visual tech.

In the lead-up to our Summit, Evan Nisselson, General Partner at LDV Capital, asked Matt some questions about his experience investing in visual tech and what he is looking forward to at our Vision Summit...

Evan: You have entrepreneurial experience as co-founder of TripleHop, which was acquired by Oracle, then as Managing Director of Bloomberg Ventures, and now as a Partner at FirstMark Capital. Which aspects of your expertise do you believe help you empower entrepreneurs to succeed, and why?

Matt:
It’s pretty important to know yourself as an investor, and to know in which areas you can have a differentiated point of view and something meaningful to bring to the table beyond the money.

For me it’s pretty simple…I’ve spent most of my career in software, data, ML/AI and infrastructure, so for the most part I try to stick to those areas as an investor, with a little bit of more experimental frontier tech here and there.  Thankfully some of the most exciting things happening in tech right now are occurring in those domains.

By focusing on a few domains, you get a strong compounding effect over time where you accumulate a lot of relevant knowledge and build deep networks.  You see the movie a bunch of times, and that gives you a lot of context.

It’s also been very helpful to have been a founder and entrepreneur. It’s harder to build empathy, and truly internalize how hard this stuff is, unless you have experienced it first-hand yourself. Having been in the trenches helps you as an investor know when to be active and involved, and when to back off and just listen.

Finally, beyond what I can bring to the table as an individual, the entire FirstMark team has done a tremendous amount of work over the years to build our platform, which you can think of as post-investment support.   We work hard to connect portfolio companies to experts, customers and talent. It’s something we’re very proud of, which has delivered very clear benefits to the companies in the FirstMark family.


Evan: FirstMark has invested in many visual technology businesses including Pinterest, which recently had a successful IPO. Please give a couple examples of others you have invested in and how they are uniquely creating and/or analyzing visual data.

Matt: Yes, that is correct about Pinterest, and indeed the right way to think about their business is “visual search” rather than “social networking”.  Computer vision is at the core of what they do.

In terms of other examples, HyperScience is a really interesting one, not just because they’re a fast-growing company, but also because they have a less obvious use case: they use computer vision and image recognition technologies to automate back-office functions for Fortune 1000 companies and the government. They can automatically extract data from massive volumes of back-office documents like forms and invoices, with very impressive levels of accuracy.

Another example would be Optimus Ride, a Boston-based self-driving vehicle technology company with roots at MIT. They heavily leverage computer vision; that’s pretty much the core of their technology.

“Big Data & AI” landscape by Matt & FirstMark

Evan: In the next 10 years - which business sectors will be the most disrupted by computer vision and why?
Matt: The two obvious candidates are medical imaging and transportation (cars, trucks). The latter will take a few more years, but will ultimately happen. The former is already under way, even though it has tiny industry penetration as of today. By the way, I do firmly believe that computer vision will “enhance”, rather than “disrupt”, areas like radiology or pathology, as it will enable doctors to focus on the trickier cases.

But there are so many other use cases beyond those: security, retail (self-service check-out), industrial (quality control), agriculture, advertising, sports, etc.  Computer vision is an immensely fertile area for entrepreneurs and investors!


Evan: We are both passionate about building communities. Our LDV Capital fund organizes the annual LDV Vision Summit and monthly LDV Community dinners, which include thousands of members. You organize two monthly events, Data Driven NYC and Hardwired NYC. Why do you organize these events and what has surprised you the most from organizing these gatherings?

Matt: Yes, I’ve been running Data Driven NYC and Hardwired NYC for a number of years now, and those communities have grown quite a bit -- 25,000 members in total.

Frankly, there was never really a goal when I started Data Driven NYC. I had access to a nice room and thought it’d be fun to get a few people together to geek out about data stuff. I knew I wanted the event to be free, open and inclusive, but there wasn’t much of a plan beyond that. Things sort of took off from there.

But it’s certainly been the gift that keeps on giving. Those events have been a tremendous source of insights for me, which have made me a better investor and board member. It’s been wonderful networking, for sure. For some reason, the quality of people who show up at those events has been outstanding (to an extent I was not fully expecting; perhaps that was the surprise). The communities have become a great talent pool for the FirstMark portfolio.

Beyond that, there’s the immense satisfaction of doing something for the broad community, without expecting anything immediate and tangible in return, a “pay it forward” kind of thing.  A bunch of people found their next career opportunity, companies were started after co-founders met at the event, etc… lots of awesome stuff all around.


Evan: LDV Capital started in 2012 with the thesis of investing in people building visual technology businesses and some said it was “cute, niche and science fiction.” How would you characterize visual technologies today and tomorrow?

Matt: Well, I’m sure whoever told you that is probably eating their words now (laugh). That time period (2010-2012) is really when deep learning exploded as a technology truly ready for prime time, and it works particularly well for all image-related stuff, so the timing was perfect. I think we are still early in the overall “productization” of deep learning, and there’s a long list of “take problem X and add AI to it” problems which haven’t been fully explored yet.

Evan: You frequently experience startup pitches. What is your one-sentence advice to help entrepreneurs improve their odds of success?
Matt:
The number one thing I’m looking for in a pitch is impeccable clarity of thought. Great pitches tend to be very compelling in a “this is where the world is inevitably going” kind of way, but on top of that, the best entrepreneurs tend to be hyper-precise and sophisticated about every nuance of the problem they are addressing and the business they are building. When you ask a question, they are just miles ahead of you; they have thought of everything. You can feel increasing excitement within yourself throughout the meeting, to the point where you just want to join them on that journey and work with that team for the next few years of your life.

Matt Turck (left) hosting Florian Douetteau, CEO of Dataiku (right) at Data Driven NYC


Evan: What are you most looking forward to at our 6th LDV Vision Summit?
Matt:
The whole thing, pretty much.  You do a wonderful job getting this community together, and the agenda is a really compelling overview of the various current and future topics relevant to computer vision.

Any Humans Assessing Images Today will be Augmented in the Future

(L to R) Zavain Dar and Chris Gibson of Recursion Pharmaceuticals at Goldman Sachs in NYC

Zavain Dar is a Partner at Lux Capital. He invests in companies that are using machine learning and AI to augment or replace physical-world functions including biology, language, manufacturing and analysis. He looks for entrepreneurs that can use software and data to hone a philosophical position on where the world is, and how to direct it for the better.

Zavain will be sharing his knowledge on trends and investment opportunities in visual technologies as a panelist and an ECVC judge at the 6th Annual LDV Vision Summit, May 22 & 23. Regular tickets are available through May 12 - get yours now to come see Zavain and 60+ other top speakers discuss the cutting edge in visual tech.

In the lead-up to our Summit, Evan Nisselson, General Partner at LDV Capital, asked Zavain some questions about his experience investing in visual tech and what he is looking forward to at our Vision Summit...

Evan: You have entrepreneurial experience as a founder and computer scientist before joining Eric Schmidt’s Innovation Endeavors, and you are now a Partner at Lux Capital. You were early at Discovery Engine, which was acquired by Twitter, and also a co-founder of Fountainhop, one of the first hyper-local social networks. Which aspects of your expertise do you believe help you empower entrepreneurs to succeed and why?

Zavain:
There's no singular answer here. At various times in my venture career I've drawn on all manner of prior experiences. Not just the "LinkedIn highlights" but, perhaps less intuitively, the lowlights: grinding it out working odd jobs as a teenager to cover tuition for private HS while living with a single mom who herself was a graduate student; getting laid off from my first job (door-to-door salesman for solar panels in high school); my work advising the "Trust the Process" era 76ers, where in the moment we were just about public enemy #1 in Philadelphia and, on many vanity metrics, arguably the worst team in the league. Seeing not only the ups, but knowing how to traverse the inevitable downs whilst keeping a cool, calm head speaks volumes to most entrepreneurs. As an investor your energy and calm (or lack thereof) can often be contagious to your entrepreneurs, and learning to be both aware and in control of that is just as important (if not more!) than any prior experience or axis of expertise itself.


Evan:  Lux Capital and LDV Capital are co-investors in Clarifai. What inspired you to invest in Clarifai and why?

Zavain:
I'd known Matt since 2013, shortly after he single-handedly won ImageNet and had just started Clarifai, while I was still at Innovation Endeavors. Tracking Matt over the years gave us a keen sense of confidence in his unique ability to not only build world-class technology but also build and lead a team. In a lot of ways Matt's been emblematic of many of our entrepreneurs, in that he's really seen success in everything he's put his time and energy into (I'll have to share more about him scoring 70 points in a HS basketball game another time). We've learned to see historical horizontal success as a leading proxy for future entrepreneurial success. Thematically, I don't think anyone can argue our computers in the future won't be able to see and understand what's around them. With all that's going on in and around privacy with the large tech incumbents, the question for us was: could there be an independent franchise that could truly deliver best-of-class technology and products while retaining neutrality and customer protection against larger tech hegemony? Around our partnership we were an emphatic yes.

(L to R) Sean Gourley of Primer, Chris Gibson of Recursion, and Zavain Dar in Abu Dhabi

Evan: You invested in Recursion Pharmaceuticals, and their co-founder/CTO Blake spoke at our 2017 LDV Vision Summit. We recently published an in-depth report on Nine Sectors Where Visual Technologies will Improve Healthcare Over the Next 10 Years. There are many visual technology companies working to develop drugs. How and why did you choose to invest in Recursion Pharmaceuticals?

Zavain:
Venture's a game of balancing long-tail serendipity hunting alongside acute deep dives to build theses and strong POVs, to have a prepared, informed and agile mind when opportunities do arise. Frankly, we got exceptionally lucky with the process on Recursion. I'd been teaching a seminar at Stanford for a number of years at the intersection of AI and philosophy, largely on how philosophical shifts underlying our practice of AI were redefining how we put into action the Scientific Method itself. In the past I've playfully called this "Radical Empiricism". When I first met Recursion founders Chris and Blake in 2016, within 5 minutes I knew I wanted to be partnered with them, as what they were working on was an almost perfect instantiation of what I'd been talking about in my seminar at Stanford. You'll have to remember that in 2016 this wasn't a hot space, and applying computer vision and deep learning to assess the morphological changes of various perturbations to myriad human cell lines wasn't an obvious idea. "Would there be enough signal to noise?" "Why hadn't others done it?" "Cell screening has been around for years, how is this different?" Over the last few years we've been able to answer all of these questions from what I call "informed skeptics" with increasing conviction based on data and tangible preclinical and clinical results. Candidly, I don't know of any other companies then that were applying computer vision to drug discovery as a foundational substrate for a larger platform, but I suspect Recursion's ability to show exceptionally high signal to noise from its approach has encouraged a handful of upstarts to join the fold.

The entire Lux Capital team in NYC

Evan: In the next 10 years - which business sectors will be the most disrupted by computer vision and why?

Zavain:
The above on computer vision in and around biotech I think still holds true in spades, and more broadly any industry that has historically relied on humans for any sort of image analysis should be empowered and augmented by computer vision. Technologically, it's no longer controversial to say computers can "see" across more data and, per datum, quantify a larger number of non-linear features and relationships. That is, computers can both see more per image and see across more images, so it's reasonable to assume any humans assessing images today will be augmented in the future. From a business perspective this touches everything. I candidly spend more time thinking about how we'll integrate computer vision across the nuances and workflows of heterogeneous industries, in ways that are sympathetic to problems unique to individual industries but productize across various industries while retaining efficiency with scale.

Evan: LDV Capital started in 2012 with the thesis of investing in people building visual technology businesses and some said it was “cute, niche and science fiction.” How would you characterize visual technologies today and tomorrow?

Zavain:
Visual technologies (from self-driving cars to facial recognition for mobile security) have undeniably arrived. The question is no longer "is this science fiction?" or "is it possible?" but rather: what will the path to pervasive distribution look like, and commercially, who will the winners be?

(L to R) Rafae (Lyft), Nan (my Stanford coteacher, Obvious Ventures), Grace Chou (Felicis), me, and Ron Alfa (Recursion) at Sundance

Evan: You frequently experience startup pitches. What is your one-sentence advice to help entrepreneurs improve their odds of success?
Zavain:
Take time to understand the investor, their portfolio, their areas of interest, and their particular lens on the world; for any company in their portfolio, try to understand how that investment came to be and how your path compares.


Evan: What are you most looking forward to at our 6th LDV Vision Summit?
Zavain:
Learning what everyone else is up to, what problems everyone is facing, and how different teams are finding creative and disruptive solutions. 

Check out the other phenomenal speakers and the agenda. Don’t wait, get your tickets now for our sixth annual LDV Vision Summit!

Visual Technologies are Progressing Rapidly as Computing Power Grows

(L to R) Ted Hou, Laura Smoliar, Drew Lanza - Founders of The Berkeley Catalyst Fund

Dr. Laura Smoliar is a Founding Partner at The Berkeley Catalyst Fund, where she focuses on pre-seed and seed stage companies in biopharma, agriculture, medical devices, clean air, clean water, energy storage, and sensors.

Laura will be sharing her knowledge on trends and investment opportunities in visual technologies as a panelist and startup competition judge at the 6th Annual LDV Vision Summit, May 22 & 23. Regular tickets are available through May 12 - get yours now to come see Laura and 60+ other top speakers discuss the cutting edge in visual tech.

In the lead-up to our Summit, Evan Nisselson, General Partner at LDV Capital, asked Laura some questions about her experience investing in visual tech and what she is looking forward to at our Vision Summit...

Evan: You recently co-founded The Berkeley Catalyst Fund with a hybrid fund structure which invests in Seed/Series A companies and includes a sister philanthropic fund. Can you tell us what is unique about your fund, why you created this hybrid structure and what sectors you are focusing in?

Laura: The Berkeley Catalyst Fund (BCF) is a standard GP/LP venture capital fund with a sister philanthropic fund managed by the UC Berkeley Foundation, which is an LP in the BCF. The UC Berkeley Foundation enables donors to contribute to building the entrepreneurial ecosystem, and we share returns on the fund with the UC Berkeley Foundation for the benefit of the College of Chemistry at UC Berkeley. Our hybrid structure allows us to comply with tax laws and conflict of interest rules; it is now being replicated in other parts of the University of California System. BCF has a sector focus on life sciences, cleantech, agriculture, and sensors. We source broadly in the Bay Area ecosystem and do not require a connection to UC Berkeley.

Evan: You have entrepreneurial experience as founder of Mobius Photonics, which was acquired by public company IPG Photonics. You have worked in diverse hardware industries including data storage, displays, lasers, and biotech instrumentation. Which aspects of your expertise do you believe help you empower entrepreneurs to succeed and why? Are there certain types of entrepreneurs that you prefer working with?

(L to R) Laura Smoliar and Serena Tan of Morrison & Foerster at Morrison & Foerster’s Global Venture Summit 2018

Laura: I have managed many complex engineering development programs that require cross-functional teams covering different areas of expertise. Many of these included international partners. That past experience is often helpful to the entrepreneurs I work with now.

I like to work with entrepreneurs who see investors as part of the team, who are open and transparent, and who like to solve problems together. I work with first-time entrepreneurs as well as very seasoned CEOs in the portfolio, and I enjoy working with both.

Evan: You invested in Invenio Imaging. We recently published an in-depth report on Nine Sectors Where Visual Technologies will Improve Healthcare by 2028. What excites you about Invenio Imaging and why did you choose to invest in them?

Laura: The Invenio team is very accomplished, very dedicated, and very impressive. We always look at the team first. They understand their domain deeply, and they are solving an acute problem for surgeons by providing rapid digital pathology. The underlying fiber laser technology is very familiar to me, as I worked in that field previously, and the development of AI for analyzing the images is exciting and has great potential for the future.

 

Evan: You have invested in other visual technology companies. Can you give us another example and what inspired you to invest?

Laura: We are also invested in a LiDAR company called Oyla. Again, we were drawn to the team, who have a successful track record and work well together. They also have a great board.


Evan: In the next 10 years - which business sectors will be the most disrupted by computer vision and why?

Laura: I see the confluence of more powerful imaging and detector technology plus AI strongly impacting the medical and automotive sectors, two areas where we have invested, but also agriculture, advanced manufacturing, consumer electronics, and security. In all cases, it enables processing information rapidly in a way that is actionable.

 

Evan: LDV Capital started in 2012 with the thesis of investing in people building visual technology businesses and some said it was “cute, niche and science fiction.” How would you characterize visual technologies today and tomorrow?

Laura: I agree that investing in people is the key. Visual technologies are progressing rapidly as computing power grows and becomes more accessible to all. I would say visual technologies are fundamentally enabling, allowing us to understand large data sets in relevant ways, more rapidly. This enables more informed decisions.

(L to R) Drew Lanza of The Berkeley Catalyst Fund, and the DNALite founders, Tim Day, CSO and Mubhij Ahmad, CEO, and Laura Smoliar

Evan: You frequently experience startup pitches. What is your one-sentence advice to help entrepreneurs improve their odds of success?

Laura: We focus on the team first, so I am looking to understand what makes the entrepreneur tick. Why are they excited about what they are doing, and can they get others to follow them? My advice: practice your pitch on people outside of your field and make sure you can connect with them, excite them, and inspire them.

Evan: What are you most looking forward to at our 6th LDV Vision Summit?

Laura: I look forward to meeting early stage entrepreneurs and investors who are interested in vision in all its various forms.

Check out the other phenomenal speakers and the agenda. Don’t wait, get your tickets now for our sixth annual LDV Vision Summit!

Visual Technologies Are the Eyes of the Intelligent Computer

Rachel Lam, Co-Founder & Managing Partner at Imagination Capital

Rachel Lam is the co-founder and Managing Partner of Imagination Capital. Prior to launching Imagination Capital, Rachel founded the Time Warner Investments group in 2003, and was head of the strategic investing arm of Time Warner Inc. for 14 years.  Rachel managed Time Warner's investments in and exits from many portfolio companies, including: Maker Studios (sold to Disney), Bluefin Labs (sold to Twitter), Admeld (sold to Google), Playspan (sold to Visa), MediaVast (sold to Getty Images), CrowdStar (sold to Glu Mobile), Kosmix (sold to Walmart), iSocket (sold to the Rubicon Project), ScanScout (sold to Tremor Video), Glu Mobile (NASDAQ: GLUU) and Turbine (sold to Warner Bros). 

Rachel will be sharing her knowledge on trends and investment opportunities in visual technologies as a panelist and startup competition judge at the 6th Annual LDV Vision Summit, May 22 & 23. Regular tickets are available through May 12 - get yours now to come see Rachel and 60+ other top speakers discuss the cutting edge in visual tech.

In the lead-up to our Summit, Evan Nisselson, General Partner at LDV Capital, asked Rachel some questions about her experience investing in visual tech and what she is looking forward to at our Vision Summit...

Evan: You founded the Time Warner Investments group in 2003 and now you are Co-Founder of Imagination Capital along with Richard Parsons. What is your most important value add to help your portfolio teams succeed and why?

Rachel: I think our most important value add is experience, judgement and perspective gained from years of investing in and managing businesses.  As Dick would say, we are "long in tooth" (a nice way of saying we're old), and having lived through the Internet Bubble of 1999-2000 and the AOL Time Warner combination, we've been through a number of investment and business cycles, which gives both of us good perspective.  We don't panic if things go differently than we expected--so many successful businesses go sideways at some point. So, hopefully, I can provide thoughtful feedback to portfolio companies as they grow, hire and particularly when they raise subsequent financing. We also have clear industry expertise in the media space and bring meaningful networks to help our portfolio companies succeed.  I have more VC and strategic corp. dev. relationships, and Dick brings the Fortune 500 rolodex which can be important to tap at the right moment.


Evan: You have extensive expertise in how technology has empowered and disrupted the media industry since you joined Time Warner in 1996.  What role has visual technologies played in this evolution? How will your insights impact your future investing strategy?

Rachel: Wow, if you think back to 1996, what a different world it was for the media industry and what "visual technology" meant back then. Visual technology in 1996 was probably the delivery of video programming through cable headends, and we spent a lot of time talking about bandwidth constraints, video compression technologies and whether cable plant could be used for IP telephony and broadband delivery (remember, this was the early days of the internet and most folks had dial-up internet access, if that!).

Time Warner bought Turner Broadcasting in 1996--a deal both Dick Parsons and I worked on--and that was for its innovative and cutting edge linear cable networks, CNN, TBS, TNT and Cartoon Network. At the time, the new cable networks were disrupting the broadcast television industry, bringing the idea that programming could be tailored to specific audiences. And the first VOD technology was developed by one of Time Warner's portfolio companies, N2 Broadband, which brought "on demand" video programming to the consumer, which was the beginning of consumers being able to watch some of "what they wanted, when they wanted" (but not "where" they wanted, as it was still tethered to the TV screen).

Visual technologies over the past twenty years have been inextricably linked to the entertainment and information video content creation and distribution market for businesses and individual consumers--enabling the amazing CREATION of special effects for filmed entertainment and incredible video games, as well as specialized AR and VR content and, equally importantly, the DISTRIBUTION of this visual content that allows consumers to now get content EVERYWHERE. Visual technologies powered incredible consumer shifts in how, when and where they consumed video content, which then rippled through the adjacent advertising market.

Today, the most interesting development is that visual technologies are becoming intelligent with computer vision, going beyond entertainment and informational video products to being the eyes of an intelligent computer, enabling new enterprise applications powered by AI and machine learning.

Rachel speaking at Time Warner Portfolio Day

Evan: Which businesses do you believe will be disrupted the most from visual technologies over the next 10 years?

Rachel: While I think there will still be innovation in the forms of video and games content created and distributed by emerging visual technologies over the next 10 years, thus continuing to disrupt the media/filmed entertainment/video content and video games industries, I think the major paradigm shifts will be in those industries to which computer vision and the processing of huge amounts of video data will bring new capabilities: security, city management, even farming (where processing drone footage might bring new efficiency and yield), and hopefully the auto industry (via autonomous vehicles). Also, industries where new VR and AR products greatly enhance current capabilities and the form factor is a small consideration (so nobody cares if the headset is clunky), such as health care applications or businesses like oil rig repair where the cost of human error is very high. So I see lots of disruption from visual technologies in enterprise-oriented markets and smaller shifts in the media/video content industry.


Evan: What are you most looking forward to at our 6th LDV Vision Summit?

Rachel: Great discussion, new and fresh ideas that challenge current accepted norms, and awesome networking.


Evan: You frequently experience startup pitches. What is your one-sentence advice to help entrepreneurs improve their odds of success?

Rachel: Really be able to articulate the industry/paradigm shift that your company captures, why your product and go-to-market strategy will capture this shift and outpace competition, and finally, show some initial traction that demonstrates that you can execute on your strategy.

From Publisher of Spy Magazine to Reimagining Business Insights as Entertainment


As we gear up for our 6th annual LDV Vision Summit, we’re highlighting some speakers with in-depth interviews. Check out the full roster of our speakers here. Regular-price tickets are now available for our LDV Vision Summit on May 22 & 23, 2019 in NYC at the SVA Theatre. 80 speakers in 40 sessions discuss how visual technologies are empowering and disrupting businesses. Register for tickets now!

The term “cult following” is typically reserved for movies or books, not defunct media enterprises from the pre-internet era. But that’s what comes to mind with Spy magazine. Spy’s founding publisher Tom Phillips is quick to point out that the satirical monthly, which captured urban discourse in the late ‘80s, never grew beyond 160,000 readers, minuscule by the standards of contemporary heavyweights.

Still, the significance of Spy can’t be overstated. In Tom’s case, the success of the magazine set him on a far-reaching trajectory that included multiple businesses sold, a nearly four-year stint at Google, and, at present, the helm of Section4, a company that’s aiming to do for business what Netflix did for entertainment.

In this article, Tom talks about how Section4 will take business intelligence to a whole new level. Read on to learn about his entrepreneurial journey, his tips for startup founders, why he’s excited for our LDV Vision Summit this May, and more.

How Spy Reinvented the Magazine

Tom Phillips makes what some might consider a controversial statement. When asked which entertainment outlets with personalized feeds are doing algorithms right, he says no one is. “Netflix, I gotta say is kind of disappointing,” he adds.

When a serial entrepreneur calls a company with a $150 billion market cap disappointing, you know they have something big in the works.


“Spy was the first hyperlinked publication before there were hyperlinks.”


As the founding publisher of Spy, Tom was at the forefront of media in a time when “clicks” didn’t count. Still, not being an Internet entity — Spy shuttered in 1998, well after Tom departed, without ever having a web presence — didn’t stop it from becoming an outlet that broke ground. In many ways, Spy’s successes and distinctions popularized and foreshadowed elements of media that prevail in today’s digital age. “[Spy] was known for really reinventing the way magazine graphics were done,” says Tom. “Spy was the first hyperlinked publication before there were hyperlinks.”

Spy’s founding team: Tom Phillips, Kurt Andersen, and Graydon Carter for Barneys, 1988.
Photo by Annie Leibovitz

He explains, “This whole idea that there were endless layers of information was something that Spy brought to life, and it brought it to life editorially and graphically. In a given article, you wouldn’t have the article played straight like a New Yorker article, or played straight with a sidebar like a TIME magazine or Fortune magazine article, or an article with a certain arc that ran for X thousand words and then had a subset piece here and a subset piece there, or a linked article like an Esquire or Rolling Stone.”

“A given article would have 10 different sidebars. It would have data attached to it. It would have charts. Some of the charts were kind of serious, some of them were just funny ways to show information.”

Some of the concepts driven by Spy seem especially pertinent in today’s media-rich digital landscape.

“It was a visualization of information, of editorial point of view, and of data that really broke new ground,” says Tom. “The editor of Entertainment Weekly at the time will tell you this if you can track him down. They basically looked at Spy and said, ‘Let's do that for a mainstream audience.’”

Photo courtesy Tom Phillips

From Old Media to New Media

Tom moved on from Spy when he started getting the sense that it was time to shift his focus to the then-nascent Internet. “When I left Spy, it was the magazine I'd always dreamed of. And I didn't want to stay in the magazine [business]. I had a little bit of foresight at the time to think that the idea of print journalism doesn't really have legs as we move into a future that's looking like it's gonna be digital, computer-based.” From Spy, Tom has had a career that follows old media’s transformation to new media: from President of ABC News Internet Ventures to ESPN to Deja.com (sold to Google and eBay in 2001) to Google (as the director of search & analytics) to Dstillery, where he served as CEO.

After Spy, Tom says, his focus shifted from the visual depiction of media to data. “We chose to do a web-based sports service because bandwidth was so constrained,” he says of his time at Starwave. “And with sports, scores and headlines tell a big part of the story.”

“We could do box scores, we could do real-time scores, we could do headlines. We could do headlines from different perspectives. That’s the stuff we could do with very limited bandwidth. Whereas with most information-intensive and entertainment-oriented media, limited bandwidth is just death. There's just nothing you can do.”

Tom’s path reflects less a diminishing interest in the media-rich imagery that Spy was known for and more the bandwidth limitations of the time. To hear him tell it, the industry has only just caught up to the big things that are possible with high bandwidth and a visual focus.

“The big winners of the 90s were Yahoo, Amazon and eBay. So which one of these is producing content? None of them at that stage. They're all just channeling user-generated information and selling stuff, right? And even selling stuff was user generated at eBay.

“So I just figured, you know, as much as I loved being a magazine publisher and being a publisher, and being a creator of inspiring and rich entertaining information, I went the other direction. I went more and more toward data centricity and abstraction. And really only with this venture, with Section4, am I back to, ‘Oh, okay. It's now 2019. We can now produce incredibly rich content and make a business out of it.’"

Tom playing in a house band called the  Algorhythms (courtesy Tom Phillips)

Tom playing in a house band called the Algorhythms (courtesy Tom Phillips)

Coming Full Circle with Section4

The thesis at Section4 is simple, but ambitious, and it’s something Tom says no one else is doing: “If we can generate business insights for professionals and actually deliver them like TV, then we can create the Netflix of business insights. That is, we can deliver a whole smorgasbord of great entertainment that is also edifying to professionals.”

“Business media today is stuck in the 20th century,” says Tom. “It's all linear TV and text-based news and analysis. To translate that into a rich digital medium is a lot of work. It's hard, but we can do that. We're convinced that our approach will appeal to 32-year-olds, not 68-year-olds.”

The key is that cutting-edge technologies like AI and machine learning haven’t even come into play yet in media. “We're fully capable of thinking in those terms and I've run companies that are big data-based AI companies. That's not the domain that is important here. What's important here is paying attention to what people need professionally and respond to emotionally.”

The goal, says Tom, is “to create great short form TV out of professional services, in a professional services domain. No one's done that before.”

Algorithms, although essential, aren’t the hard part. “Our content, because we're professional, is much easier to quantify and categorize and create meta tags around. The algorithms are gonna be easy once we get them.” It will be a step up from Netflix because “[Netflix is] so squishy in terms of what the content is. It's hard to capture it in any kind of meta sense.”


“When people see it, they're gonna say, ‘Whoa. I didn't know I could be entertained and edified at the same time. I didn't know I could be professionally enhanced while I watched something really fun.’”


What’s more critical than any technology or algorithm is what media companies are still struggling with: attention and monetization. Section4 is ready to take advantage of the critical mass that media has reached.

“Getting attention in a crowded landscape is the biggest [constraint]. And then convincing people that this is good enough to pay for. That has changed dramatically in the last couple years. It used to be nothing was good enough to pay for, and partly because nothing was good enough to pay for.

“And now you have all these over-the-top subscription services that are all consumer-based, entertainment-based. We’re [trying to] create a network that's professional, that's even more premium-priced, and we think we can.”

What Visual Data Means for Publishing

As Section4 gears up for launch, Tom reflects on the future, and highlights where he thinks publishing still has a ways to go. “We live in a world where our access to data, and thereby our interest, is increasing by a multiple every year. It's crazy in terms of what's available to us.”


“The visualization of the data, to making it meaningful and digestible, will make great leaps in the next few years.”


“It has to happen because people are overwhelmed. And people need to see it to understand it. There's so much to be done. That's the thing that will be on everybody's mind and will make great strides, and frankly, we'll change the way we communicate.”

LDV Vision Summit and Tips for Founders

As he gets ready for LDV Vision Summit next month, where he’ll be a speaker and competition judge, Tom praises the diversity that he knows will be present.

“As an old white guy ... I know I'm gonna be in the minority, not in the majority. I spent my whole professional life being in the majority, and I like not being in the majority. It's kind of cool. It's refreshing. It's good for me and good for everyone.”

Last tips for the founders who’ll present onstage in the LDV Vision Summit’s two competitions: confidence is key.

“You may not have a unique vision or be the most competent and qualified person to start this business, but you better believe that both of these things are true.”

As for what founders can learn from his journey, Tom adds, “I spent most of my career chasing great business ideas and striving to build wildly successful businesses. In retrospect, I spent too little of it building products I love. The combination, of course, is magical.”

If you’re building a unique visual tech company, we would love for you to join us. At LDV Capital, we’re focused on investing in deep technical people building visual technology businesses. Through our annual LDV Vision Summit and monthly community dinners, we bring together top technologists, researchers, startups, media/brand executives, creators and investors with the purpose of exploring how visual technologies leveraging computer vision, machine learning and artificial intelligence are revolutionizing how humans communicate and do business.

2019 LDV Vision Summit Visual Technology Trends

Rebirth of Medical Imaging by Daniel Sodickson, Vice-Chair for Research, Dept of Radiology; Director, Bernard & Irene Schwartz Center for Biomedical Imaging; Principal Investigator, Center for Advanced Imaging Innovation & Research at New York University Langone Medical Center ©Robert Wright/LDV Vision Summit 2018

We launched the first annual LDV Vision Summit five years ago, in 2014, with the goal of bringing together our visual technology ecosystem to explore how visual technology is empowering business and society.

Our gathering is built from the ground up, by and for our diverse visual tech ecosystem - from entrepreneurs to researchers, professors and investors, media and tech execs, as well as content creators and anyone in between. We put our Summit together for us all to find inspiration, build community, recruit, find co-founders, raise capital, find customers and help each other succeed.

Every year we highlight experts working on cutting edge technologies and trends across every sector of business and society.  We do not repeat speakers and we are honored that many attendees join every year.

Below are many of the themes that will be showcased at our 6th LDV Vision Summit May 22 & 23 in NYC. Register here; we hope to see you this year.


Visual Technologies Revolutionizing Medicine

Visual assessment is critical to healthcare — whether that is a doctor peering down your throat as you say “ahhh” or an MRI of your brain. Over the next ten years, healthcare workflows will become mostly digitized, more personal data will be captured and computer vision, along with artificial intelligence, will automate the analysis of that data for precision care. Our speakers will showcase how they are deploying visual technologies to revolutionize medicine:

  • CIONIC is superpowering the human body.  

  • Ezra is the new way to screen for prostate cancer.  

  • MGH/Harvard Medical School has developed AI that is better than most experts at diagnosing a childhood blindness disease.

  • Teladoc Health provides on-demand remote medical care.

  • and more...


Computational Imaging Will Power Business ROI

Computational imaging refers to digital image capture and processing techniques that use digital computation instead of optical processes. Entrepreneurs and research scientists from Facebook, Sea Machines, Cornell Tech, and University College London will share how their research is delivering valuable results.

  • GM and Cruise Automation are using state-of-the-art software and hardware to create the world's first scalable AV fleet.

  • Inferring 3D information from video acquired by a single moving camera.

  • Deep convolutional neural network (ConvNet) for multi-view stereo reconstruction.

  • Image Quality and Trust in Peer-to-Peer Marketplaces.

  • and more...

Synthetic Data Is Disrupting Legacy Businesses

Synthetic data is computer-generated data that mimics real data; in other words, data created by a computer rather than collected from the real world. Software algorithms can be designed to create realistic simulated, or “synthetic,” data, and this computer-generated data is disrupting legacy businesses including Media Production, E-Commerce, Virtual Reality & Augmented Reality, and Entertainment. (A toy sketch of how such data can be generated follows the speaker list below.) Experts speaking on this topic include:

  • Synthesia Delivers AI-driven video production.

  • Forma Technologies is building photorealistic avatars that are a dynamic form for people’s online identity.

  • and more...
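
To make the idea concrete, here is a minimal sketch of generating synthetic labeled images by compositing a simple object onto random backgrounds. Production systems use game engines or generative models rather than this toy approach, and everything below is illustrative:

```python
# Toy synthetic-data generator: every image is rendered in software,
# so the ground-truth label (object position and size) comes for free.
import numpy as np

rng = np.random.default_rng(0)

def synth_example(size=64):
    img = rng.integers(0, 80, (size, size), dtype=np.uint8)   # noisy background
    x, y = rng.integers(12, size - 12, 2).tolist()            # object center
    r = int(rng.integers(4, 10))                              # object radius
    yy, xx = np.ogrid[:size, :size]
    img[(xx - x) ** 2 + (yy - y) ** 2 <= r ** 2] = 220        # bright "object"
    return img, (x, y, r)                                     # image + perfect label

dataset = [synth_example() for _ in range(1000)]              # unlimited labeled data
```

The appeal for the businesses above is exactly that last line: once a generator exists, labeled training data is effectively unlimited and costs nothing to annotate.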


Where Are The Next Visual Tech Unicorns?

A large number of visual technology businesses have already broken the $1B ceiling: Pinterest, DJI, Magic Leap, Snap, Zoom, Zoox, etc. With applications of computer vision and machine learning on an exponential rise, top investors in visual technologies will discuss the sectors and trends they see with the most potential for unicorn status in the near future:

  • Nabeel Hyatt, Spark Capital

  • Rachel Lam, Imagination Capital

  • Matt Turck, FirstMark Capital

  • Laura Smoliar, Berkeley Catalyst Fund

  • Hadley Harris, Eniac

  • Zavain Dar, Lux Capital

  • and more...

Experiential Media is the Future

The Internet and digital media have built reputations for nameless, faceless actors and disconnection, but advances in tech and new approaches are changing that. Whether through interactive video or a live music video game, visual technologies are creating experiences that connect people to content & each other.

  • FTW Studios is creating experiences designed to bring people together — to be a part of live, shared moments.

  • Section4 is reinventing professional operational media, making it succinct, discoverable, provocative and actionable.

  • Eko is an interactive storytelling platform that continues to be a leader in Choice Driven Entertainment.

  • and more...

Nanophotonics Is Pushing the Envelope

Nanophotonics can provide high bandwidth, high speed and ultra-small optoelectronic components. These technologies have the potential to revolutionize telecommunications, computation and sensing.

  • Voyant is creating the next generation of chip-scale LIDAR.

  • MacArthur Fellow Michal Lipson is a physicist known for her work on silicon photonics. She is working on many projects, such as drastically lowering the cost and energy of high-power computing for artificial intelligence.

  • and more...


Factory to Front Door, Visual Tech is Improving Logistics

Breakthrough visual technologies will transform the speed, safety and efficiency of agriculture, manufacturing, supply chain and logistics. Legacy actors and startups alike are finding fascinating use cases to implement computer vision and machine learning to improve their processes:

  • Plus One’s software & computer vision tackles the challenges of material handling for logistics.

  • Level 4 Autonomous Vehicles for Urban Logistics

  • and more...

The Next Decade Will See Trillion Dollar Sectors Disrupted by Visual Technologies, According to Hadley Harris

Hadley Harris, Founding Partner at Eniac, Captured by DepthKit

Hadley Harris is the Founding General Partner at Eniac. He has done a little bit of everything on the path to co-founding Eniac, starting out as an engineer at Pegasystems, a product manager at Microsoft and a strategist at Samsung. He ran a few aspects of the business across product and marketing at Vlingo prior to its sale to Nuance. He also served as CMO at Thumb until it was acquired.

Hadley will be sharing his knowledge on trends and investment opportunities in visual technologies as a panelist and startup competition judge at the 6th Annual LDV Vision Summit May 22 & 23. Early Bird tickets are available through April 16; get yours now to come see Hadley and 60+ other top speakers discuss the cutting edge in visual tech.

In the lead-up to our Summit, Evan Nisselson, General Partner at LDV Capital, asked Hadley some questions about his experience investing in visual tech and what he is looking forward to at our Vision Summit...

Evan: You had extensive entrepreneurial, technical and business operations experience before you co-founded Eniac Ventures. Which aspects of your expertise do you believe help you empower entrepreneurs to succeed, and why?

Hadley: I was very fortunate to be a senior leader at two startups that were acquired. I worked with a bunch of super talented entrepreneurs and executives that I learned a lot from. I was also lucky to have some great investors and board members who helped me see how VCs could help empower teams to thrive. Interestingly, what stuck with me the most is some of the terrible behavior I witnessed by VCs during fundraising: spending the whole meeting on their phones, leaving the room several times during the pitch, eating 3-course meals without looking up at what we were presenting. I told myself that if I ever became a VC I’d focus on helping entrepreneurs with empathy and respect for the amazingly difficult task they had chosen.


I consider the most interesting technological theme right now to be the way data and machine intelligence are changing every industry and our daily lives. A strong argument could be made that visual technology is the most important input to data+machine intelligence systems.

-Hadley Harris


Evan: Eniac has invested in many visual technology businesses that leverage computer vision. Please give a couple of examples and how they are uniquely analyzing visual data.

Hadley: By my count, 30% of the investments we’ve made over the last few years have a visual technology component. A handful are in autonomy and robotics. For example, iSee is an autonomous transportation company that has developed a humanistic AI that is able to flourish in dynamic and unpredictable situations where other solutions fail. Obviously, they can only do that by leveraging CV as an input to understand the vehicle’s surroundings. Another one that is really interesting is Esports One. They use computer vision to understand what’s going on in esports matches and surface real-time stats and analytics to viewers. It’s like the first down marker on steroids.

Evan: In the next 10 years, which business sectors will be the most disrupted by computer vision and why?

Hadley: Over the next 10 years there are a number of trillion-dollar sectors we’re exploring at Eniac that will be disrupted by visual technology: food & agriculture, construction, manufacturing, transportation, logistics, defense. But if I were to pick one it would be healthcare. We’re already seeing some really interesting use cases taking place in hospitals, but that’s just the very tip of the iceberg. When these technologies move into the home, so that individuals are being monitored on a daily basis, the way we think about health and wellness will dramatically change.

Evan:   We agree that visual technologies will have a tremendous impact on the healthcare industry. Actually, our annual LDV Insights deep dive report last summer analyzed Nine Sectors Where Visual Technologies Will Improve Healthcare by 2028.

Eniac & Sea Machines - [L to R] Hadley Harris (Eniac), Jim Daly (COO of Sea Machines), Michael Johnson (CEO of Sea Machines), Vic Singh (Eniac)

Evan: Eniac and LDV Capital are co-investors in Sea Machines, which captures and analyzes many different types of visual data to deliver autonomous workboats and commercial vessels. What inspired you guys to invest in Sea Machines?

Hadley: We’ve had a broad thesis over the last 4 years that everything that moves will be autonomous. When investment in the best autonomous car and truck companies became prohibitively competitive and expensive, we started looking for underserved areas where autonomy could drive significant value. This drove us to look at the autonomous boat space. We found a few teams working on this problem; by far the best was Sea Machines. They stood out because they married strong AI and CV abilities with a very deep understanding of the naval space, based on decades in the boating ecosystem.

Evan: LDV Capital started in 2012 with the thesis of investing in people building visual technology businesses and some said it was “cute, niche and science fiction.” How would you characterize visual technologies today and tomorrow?

Hadley: I consider the most interesting technological theme right now to be the way data and machine intelligence are changing every industry and our daily lives. A strong argument could be made that visual technology is the most important input to data+machine intelligence systems. So no, I don’t think visual technologies are cute, niche or science fiction; they are one of the primary drivers of the biggest technological theme of our time.

Hadley Harris, Eniac

Evan: You frequently experience startup pitches. What is your one sentence advice to help entrepreneurs improve their odds for success?

Hadley: Know the ecosystem your startup is playing in absolutely cold.

Evan: What are you most looking forward to at our 6th LDV Vision Summit?

Hadley: I’m excited to be at such a focused event where I can hear from amazing entrepreneurs and scientists about the cutting-edge projects they’re working on.

Get your Early Bird tickets by April 16th for our 6th Annual LDV Vision Summit, featuring fantastic speakers like Hadley. Register now before ticket prices go up!

At Facebook Camera, Everyday They Think About Delivering Value to AR Users

Matt Uyttendaele, LDV Vision Summit 2018 ©Robert Wright

Early Bird tickets now available for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

Matt Uyttendaele is the Director of Core AI at Facebook Camera. At our 2018 LDV Vision Summit, Matt spoke about enabling persistent Augmented Reality experiences across the spectrum of mobile devices. He shared how, at Facebook Camera, they are solving this and giving creators the ability to author these experiences on their platform. He showcased specific examples and highlighted future challenges and opportunities for mobile augmented reality to succeed.

Good morning LDV. I am Matt Uyttendaele. I work on the Facebook camera and today I'm going to talk about our AR efforts on smartphones.

We at Facebook and Oculus believe that AR wearables are going to happen someday, but we're not waiting for that. We want to build AR experiences into the apps that our community uses every day, those being Messenger, Facebook and Instagram. And I'm going to do a deep dive into some of those efforts.

How do we bring AR to mobile? There are three major investments that we're making at Facebook. First is just getting computer vision to run on the smartphone. We take the latest state-of-the-art computer vision technology and get that to run at scale on smartphones.

Second, we're building a creator platform. That means that we want to democratize the creation of AR into our apps. We want to make it super simple for designers to create AR experiences on our apps.

And then we're constantly adding new capabilities. The Facebook app updates every two weeks. And in those cycles, we're adding new capabilities and I'll dive into some of those in the talk.

One of our challenges in bringing AR to mobile devices at this scale is that there's a huge variety of hardware out there, right? Some of these are obvious, like camera hardware. We need to get computer vision to run on a huge variety of phones. So that means we have to characterize exactly the cameras and lenses on all these phones. Inertial sensors are super important for determining how the phone moves. That works pretty well on the iPhone, not so much on Android. It was telling that on the Pixel 2, one of the top marketing bullet items was an IMU synchronized with the camera, because that enables AR. But that's a challenge that we face in bringing these experiences at scale. All told, we support 10,000 different SKUs of cameras with our apps.

So let's dive a little bit into some of the computer vision that's running in our AR platform. On the left, we take a video frame in along with IMU data, and a user may select a point to track within that video. First we have a tracker selector that analyzes the incoming frame and is also aware of the capabilities of the device we're operating on.

©Robert Wright/LDV Vision Summit 2018

Then we've built several state-of-the-art computer vision algorithms. I think our face tracker is probably one of the best monocular, or maybe the best monocular, face trackers out there running on a mobile phone. But we also have a simple inertial tracker that's just using the IMU. And we've implemented a really strong simultaneous localization and mapping algorithm, also known as SLAM. At any given time one of these algorithms is running while we're doing an AR experience. And we can seamlessly transition between these algorithms.

For example, if we're using SLAM and we turn the phone toward a white wall, where there are certainly no visual features to track, we can seamlessly transition back to the IMU tracker. That lets us deliver a consistent camera pose across the AR experience, so that your AR object doesn't move within the frame. Okay, so that's the dive into our computer vision.
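
As a rough illustration of that fallback logic, here is a minimal sketch of a tracker selector; the `slam` and `imu_tracker` objects are assumed interfaces, and this is the general pattern Matt describes rather than Facebook's implementation:

```python
# Sketch: fall back from SLAM to IMU-only tracking on low-texture frames.
import cv2
import numpy as np

def count_corners(gray_frame):
    # Cheap texture check: SLAM needs trackable corners; a white wall has none.
    corners = cv2.goodFeaturesToTrack(gray_frame, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
    return 0 if corners is None else len(corners)

class TrackerSelector:
    def __init__(self, slam, imu_tracker, min_features=50):
        self.slam = slam                  # hypothetical visual SLAM tracker
        self.imu = imu_tracker            # hypothetical inertial-only tracker
        self.min_features = min_features
        self.pose = np.eye(4)             # 4x4 camera pose, carried across handoffs

    def track(self, gray_frame, imu_sample):
        if count_corners(gray_frame) >= self.min_features:
            self.pose = self.slam.update(gray_frame, imu_sample, self.pose)
        else:
            # Seamless handoff: integrate the IMU from the last known pose so
            # the rendered AR object stays fixed in the frame.
            self.pose = self.imu.integrate(imu_sample, self.pose)
        return self.pose
```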

Here's a look at our creator platform. Here's somebody wiring up our face tracker to an experience that he has designed; these arrows were designed by this creator. And similarly, here's somebody else taking our face tracker and wiring it up to a custom mask that they have developed. So this is our designer experience, in something called AR Studio, that we deliver.

AR Studio is cross-platform, obviously, because our apps run cross-platform, so you can build an AR experience and deliver it to both iOS and Android. It's delivered through our Facebook camera stack, which means it runs across our variety of apps: Messenger, Facebook and Instagram. And we've enabled these AR experiences to be delivered to the 1.5 billion people that run our apps. So if you build an experience inside AR Studio, you can have a reach of 1.5 billion people.


“We've enabled these AR experiences to be delivered to 1.5 billion people that run our apps.”


Okay, now let me look at a new capability that we recently delivered. This is called Target AR. Here this user is taking his phone out and pointing it at a target that's been registered in our AR Studio. So this is a custom target, and they've built a custom experience, the overlay on that target. So when we recognize that target, their experience pops up and is displayed there.

And we didn't build this as a one-off experience that's shown here; we built this as a core capability of the platform. So here, our partners at Warner Brothers deployed these posters across Austin at South by Southwest, around the time of the Ready Player One launch, and they used AR Studio to build a custom experience where, when we recognize that poster, their effect pops up in the app. And here's one of my partners on the camera team doing a demo, and that Warner Brothers experience popped up as it recognized the poster.
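
Matt doesn't detail how the poster is recognized, but a classic way to build this kind of planar-target recognition is feature matching plus a RANSAC homography. The sketch below uses generic OpenCV calls and an assumed `poster.png` registered target, purely as an illustration and not Facebook's method:

```python
# Sketch: recognize a registered planar target (e.g. a poster) in a frame.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

target = cv2.imread("poster.png", cv2.IMREAD_GRAYSCALE)  # registered target
t_kp, t_desc = orb.detectAndCompute(target, None)

def recognize(frame_gray, min_inliers=25):
    f_kp, f_desc = orb.detectAndCompute(frame_gray, None)
    if f_desc is None:
        return None
    matches = matcher.match(t_desc, f_desc)
    if len(matches) < min_inliers:
        return None
    src = np.float32([t_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([f_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # The homography maps poster coordinates into the frame; the inlier mask
    # tells us whether the matches agree on a single planar surface.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(mask.sum()) < min_inliers:
        return None
    return H  # the AR effect is then rendered using this transform
```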

What I want to leave you with is that we at Facebook deliver value to users in AR, and that's something we think about every day on the Facebook camera team. I've shown you some novel experiences, but what we really strive to do is deliver real user value through these things. So please look at what we're doing over the next year in our Facebook camera apps across Messenger, Facebook and Instagram, because that's what we hope to achieve.

Thank you.

Watch Matt’s keynote at our LDV Vision Summit 2018 below and check out other keynotes on our videos page.

Early Bird tickets are now available for our LDV Vision Summit May 22 & 23, 2019 in NYC to hear from other amazing visual tech researchers, entrepreneurs and investors.

Delivering AR Beyond Dancing Hotdogs

Early Bird tickets are available through April 16 for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

At the LDV Vision Summit 2018, Joshua Brustein of Bloomberg Businessweek spoke with Serge Belongie (Cornell Tech), Ryan Measel (Fantasmo), Mandy Mandelstein (Luxloop) and Jeff McConathy (6D.ai) about how the digital and physical world will converge in order to deliver the future of augmented reality.

They spoke about how the technology stack for augmented and mixed reality will need several new layers of different technologies to work well together. From spatial computing with hyper-precision accuracy in multiple dimensions to filtering contextually relevant data to be displayed to the user based on location, activity, or time of day. What are the technical challenges and opportunities in delivering the AR Cloud?

Watch and find out:

Our LDV Vision Summit 2019 will have many more great discussions by top experts in visual technologies. Don’t miss out, check out our growing speaker list and register now!

Measuring Sleep with a Camera Makes Nanit a Powerful Tool for Research

Assaf Glazer, LDV Vision Summit 2018 ©Robert Wright

Early Bird tickets now available for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

Assaf Glazer is the founder & CEO of Nanit. At our 2018 LDV Vision Summit he shared how Nanit's unique computer vision technology and product helps children and their parents sleep better. Merging advanced computer vision with proven sleep science, Nanit provides in-depth data to help babies, and parents, sleep well in those crucial early months and years of a baby’s development. He spoke about how this technology can expand to the greater population as well, leading to early detection of other disease states like sleep apnea, seizures, autism and more. He shared the state of the art today and how he envisions sleep tech helping society in 10 and 20 years.

Hello, everyone. I'm Assaf, the CEO and founder of Nanit. Our business is human analytics. If you are not familiar with Nanit, we sell baby monitors that measure sleep and parental interaction. We use computer vision for this purpose. And there are thousands of parents today across the U.S. that are using Nanit to sleep better.

Nanit is a camera that is mounted above the crib. The experience with Nanit is very different from any other baby monitor that you can think of. I won't go much into reviews, but I would say that on BGR, they wrote, "This baby monitor is so impressive, we want to have a baby just to try it."

The camera costs $279 and there is a subscription, $10 a month or $100 a year. If you look at how we do it, actually, we do it at different levels. First, we give you the best view of your child, and then we give you some real-time information about what happened in the crib. It helps you manage nap time, check on your child remotely and know, “my baby woke up one hour ago; he slept for 20 minutes.” We give you daily and weekly updates, and every week we'll also send you sleep tips and recommendations on how to improve sleep based on the personal data that we saw during the last week. And finally, we celebrate achievements and give you rewards for sleep milestones and accomplishments from the last week.


“When you measure sleep with a camera, you can also measure the environment, the behavior and build a picture around the sleep architecture.”


We measure sleep. We actually measure sleep better than the state-of-the-art medical device. There are different ways to measure sleep, but when you do it with a camera, you can also measure the environment and the behavior, and build a picture around the sleep architecture. In this context of babies, we also measure the parent: we know when the parent is approaching the crib, when he's touching the baby, when he's taking the baby out of the crib, and we differentiate these from other kinds of moments that we would like to ignore. Then, by combining all these behaviors together, along with other behaviors of the child, we can have a very precise diagnosis of sleep issues and beyond.
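
For intuition only, here is a toy actigraphy-style sketch of how a camera can score sleep from motion. Nanit's actual models are far richer (pose, parent detection, sleep architecture), and every name and threshold below is made up:

```python
# Toy camera-based sleep scoring: frame differencing gives a motion signal,
# and sustained stillness over 30-second epochs is scored as sleep.
import numpy as np

def motion_signal(frames):
    # frames: grayscale images as 2D numpy arrays, sampled once per second
    return np.array([np.mean(np.abs(a.astype(float) - b.astype(float)))
                     for a, b in zip(frames[:-1], frames[1:])])

def score_sleep(motion, epoch=30, still_thresh=2.0):
    n = len(motion) // epoch
    epochs = motion[:n * epoch].reshape(n, epoch).mean(axis=1)
    return epochs < still_thresh  # True = scored asleep for that epoch
```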

This is deeply anchored in research. We were part of the Runway program at Cornell Tech (they help people looking to commercialize science), and they really helped us build collaborations between different verticals: sleep experts and psychologists, cognitive development, model development, etc. Today, we have plenty of studies in the works, and in collaboration with different types of institutes we are publishing papers.

Just last month, we published a paper at the IBSA conference. We took three months of data, 175,000 nights’ sleep, and analyzed and tracked parental intervention patterns for babies between zero and 24 months of age.

Assaf Glazer, LDV Vision Summit 2018 ©Robert Wright

So Nanit is also a research tool, one that can tell you about behaviors and sleep. Here you see data from across the US. For instance, you can see that babies in Denver tend to wake up one more time per night than in the rest of the states. I don't know the reason, but it is just a fact. We have very precise data on babies’ sleep, so we can tell you every day if a sleep pattern changes. It's interesting to see, for instance, that at Thanksgiving parents put their babies to sleep earlier. Maybe so they will have quality time during their dinner?

Nanit is a very powerful tool. The ability to record the night and then analyze it will serve the needs of people in the medical field as well as parents. Looking at Nanit as a research tool, Nanit gives you so much information. By having Nanit in the house and monitoring thousands of normal children, we can learn more about what is normal. And if we know what is normal, then we can know what's not normal, and whether these are the early signs of a future disease.

There are constant movements to try to identify children who are at risk for autism earlier and earlier. With this technology, we could certainly develop some normative data and be able to identify otherwise unrecognized signs. This technology could also be used in the adult population, in a hospital setting, a hospice setting, or perhaps a nursing care setting.

It can look at restless leg movement, it can look at breathing, and of course, sleep apnea is much more common in adults than in children. It can really open our eyes to things we didn't know as researchers, that we couldn't study in our own labs, and it can change the way we treat children and adults as well.

Nanit is the future of consumer-facing health. When we are looking at the future, you can think about applications in pediatrics, of course, but also adult sleep, elder care and big data analysis. Thank you.

Watch Assaf Glazer’s keynote at our LDV Vision Summit 2018 below and check out other keynotes on our videos page.

Early Bird tickets are now available for our LDV Vision Summit May 22 & 23, 2019 in NYC to hear from other amazing visual tech researchers, entrepreneurs and investors.

We are accepting applications to our Vision Summit Entrepreneurial Computer Vision Challenge for computer vision research projects and our Startup Competition for visual technology companies with <$2M in funding. Apply now & spread the word.

Partners at Glasswing, ENIAC, & OMERS Reveal Their Top Industries for Visual Tech

Early Bird tickets are available through April 16 for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

At the LDV Vision Summit 2018, Sarah Fay of Glasswing Ventures, Michael Yang of Comcast Ventures (now at OMERS) and Nihal Mehta of ENIAC spoke with Jessi Hempel of Wired (now LinkedIn) about the industries they think carry the most opportunity for visual technology.

Watch what they have to say about the future of transport, VR, cyber security, drones and much more…

Our LDV Vision Summit 2019 will have many more great discussions by top investors in visual technologies. Don’t miss out, check out our growing speaker list and register now!

Ezra is Revolutionizing Cancer Detection with Computer Vision and Artificial Intelligence

As we gear up for our 6th annual LDV Vision Summit, we’ll be highlighting some speakers with in-depth interviews. Check out the full roster of our speakers here. Early Bird tickets now available for our LDV Vision Summit on May 22 & 23, 2019 in NYC at the SVA Theatre. 80 speakers in 40 sessions discuss how visual technologies are empowering and disrupting businesses. Register for early bird tickets before March 29!

Ezra CEO and Co-founder Emi Gal (courtesy Emi Gal)

When Emi Gal was searching for the cofounder for his New York-based cancer detection startup Ezra, he pored over 2,000+ online profiles of individuals with expertise in medical imaging and deep learning. From there, he reached out to 300 individuals, conducted 90 interviews, and walked four finalists through a four-month project. The entire process took nine months.

All while still at his “day job.” For Emi, that day job was his European startup that had just been acquired by one of its largest American clients. Not your typical founder’s story, but as his meticulous cofounder search shows, Emi is not one to take a blind leap.

It’s this willingness to go methodically down the rabbit hole and to leave no stone unturned that’s defined Emi’s approach, from one venture to the next.

To date, Ezra has raised $4 million in its first round, led by Accomplice with participation from Founders Future, Seedcamp, Credo Ventures, Esther Dyson, Taavet Hinrikus, Alex Ljung, Daniel Dines and many others. Emi has since brought on a head of talent to help expand the team from two to 12 in four months.

In this article, Emi talks about how Ezra is changing the game for cancer detection through visual technology. Read on to learn about his scientific research-driven approach, the long-term possibilities for Ezra, and what to expect when he takes the stage at our LDV Vision Summit this May.

Photo courtesy Ezra

MISSION-DRIVEN: MAKING CANCER SCREENING MORE AFFORDABLE, ACCURATE AND NON-INVASIVE

The name Ezra means “help” in Hebrew.

It’s a spot-on moniker for a company that, although powered by artificial intelligence and visual technology, Emi says is more mission-driven than technology-driven.

Multiple components in cancer detection rely on the painstaking study and analysis of visual inputs, making visual technology ripe for leveraging.

In its current stage, Ezra is focused on detecting prostate cancer through full-body MRIs that are then analyzed by artificial intelligence. Ezra’s subscription-based model offers an MRI and access to medical staff and support at $999 for one year.

The full-body MRI is a huge change compared to the most prevalent detection method for a cancer that kills 1 in 41 men: getting a biopsy, which is painful, uncomfortable, and can have unpleasant side effects.

Magnetic resonance imaging, on the other hand, eliminates the discomfort and is more accurate than biopsies or blood tests. MRIs, however, are not without their costs: about $1,500 if an individual books one himself, sans Ezra. And then there’s the time a radiologist needs to read the scan.

Radiologist vs AI detection of cancer in MRI scans (Courtesy Ezra)

“We’re trying to make MRI-based cancer screening affordable,” says Emi. “The cost of getting screened with MRI has two parts: the scanning and the analysis. You need to get a scan, and a radiologist needs to analyze the scan and provide a report based on the scan. Those two things each have a cost associated with them. We’re using AI to help drive the costs down by assisting the radiologist, and by accelerating the MRI scanning time.”

The first thing a radiologist does is make a bunch of manual measurements of the organ in question — in Ezra’s case, the size of the prostate. If you have an enlarged prostate, you have a higher likelihood of having cancer. If there’s a lesion like a tumor in the prostate, radiologists need to measure the size and location of the tumor. They need to segment the tumor so they can make a 3D model so the urologist knows what to focus on. All of those measurements and annotations are currently done manually, which makes up about half of a radiologist’s workload.

“What we’re focusing on is using the Ezra AI to automate all of those trivial, manual tasks, so the radiologist can spend less time per scan doing manual things, and instead focus on analysis and reporting. That will potentially make them more accurate, as well.”
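To make one of those automated measurements concrete, here is a minimal sketch of estimating organ volume from a binary segmentation mask, the kind of output an assistive model might hand a radiologist. The mask and voxel spacing are assumed inputs; this is illustrative, not Ezra’s pipeline:

```python
# Sketch: organ volume from a 3D segmentation mask and the scan's voxel size.
import numpy as np

def organ_volume_ml(mask, spacing_mm):
    """mask: 3D boolean array (z, y, x) from a segmentation model.
    spacing_mm: voxel size in mm per axis, e.g. (3.0, 0.6, 0.6)."""
    voxel_mm3 = float(np.prod(spacing_mm))   # volume of one voxel in mm^3
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> milliliters

# The resulting volume feeds the radiologist's report; as Emi notes, an
# enlarged prostate correlates with a higher likelihood of cancer.
```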

For the future, says Emi, the team is already considering how to use AI to accelerate the scanning process as well.

“The reason this is possible now and it wasn’t possible before is that deep learning has given us the ability to be as good or better than humans at these things, which means it’s now feasible to create these types of technologies and implement them into the clinical workflow.”

THE SEED OF A NEW IDEA: PLANT IT BEFORE YOU’RE READY

While it looks like Emi has seamlessly gone from one successful venture to the next, the reality is a lot more nuanced.

It was while he was still running Brainient, before it was acquired, that he started plotting his next move. In 2015, Emi was introduced to Hospices of Hope in Romania, which builds and operates hospices that care for terminally ill cancer patients. During his visits with doctors and patients, the seed of Ezra was born.

Cancer struck a personal chord. As a child, Emi had developed hundreds of moles on his body, which put him at very high risk of melanoma. He started getting screened and going to dermatologists regularly from the age of 10 onwards to make sure they weren’t cancerous. While he hasn’t yet had any malignant lesions, he’s experienced the discomfort of biopsies firsthand, and he’s always been very conscious about the importance of screening.

“I realized while speaking to doctors that one of the biggest problems is that people get detected late. I started looking into that and realized that [this is] because there’s no fast, affordable, accurate way to screen cancer anywhere in the body,” says Emi. From there, he began researching different ways in which you can screen for cancer. A computer scientist by education, he spent two years on what he calls, “learning and doing an accelerated undergrad in oncology, healthcare, genetics and medical imaging.”

That accelerated education is supplemented by an incredibly impressive team. It’s no surprise that Ezra cofounder Diego Cantor is equally curious and skilled, and brings an enormous technical repertoire to the table: an undergraduate education in Electronic Design and Automation Engineering, a master’s degree in Computer and Systems Engineering, a PhD in Biomedical Engineering (application of machine learning to study epilepsy with MRI), and post-doctoral work in the application of deep learning to solve medical imaging problems. The scientific team is rounded out with deep technical experts: Dr. Oguz Akin (professor of radiology at Weill Cornell Medicine and a radiologist at Memorial Sloan Kettering Cancer Center), Dr. Terry Peters (director of Western University’s Biomedical Imaging Research Centre), Dr. Lawrence Tanenbaum (VP and Medical Director, Eastern Division; Director of MRI, CT and Advanced Imaging at RadNet), and Dr. Paul Grewal (best-selling author of Genius Foods).

Ezra’s deeply technical team trained its AI with data sets from the National Institutes of Health marked up by radiology experts. On new data sets, the Ezra AI agreed with the experts 90% of the time.

LEARNING (AND STARTUP SUCCESS) IS ALL ABOUT COURSE CORRECTING

As a lifelong learner who actively chronicles his year-long attempts to gain new skills and habits, Emi has picked up a thing or two about doing new things for the first time.

“Learning anything of material value is really, really hard,” says Emi, who’s done everything from training his memory with a world memory champion to hitting the gym every single day for one year.

This focus on the process and being comfortable with being uncomfortable came in handy when Emi, who studied computer science and applied mathematics in college, pivoted into cancer detection.

Emi cycled through twelve potential ideas before deciding on Ezra’s current technology. At every turn, he researched methodically and consulted with experts.

One of the promising ideas that Emi considered — shelved for now — is DNA-based liquid biopsies. “We’re at least a decade away from DNA-based liquid biopsies being feasible and affordable,” says Emi, who was searching to make an immediate impact.

Emi had just sold Brainient and was on his honeymoon when he came up with the winning idea in November 2016. “I had this idea: what if you could do a full-body MRI and then use AI to lower the cost of the full-body MRI, both in terms of scanning time as well as analysis in order to make a full-body MRI affordable for everybody? My wife loved the idea, and that’s always a good sign.”

In January 2017, Emi discovered a research paper that was comparing the current way to screen for prostate cancer — a Prostate-Specific Antigen (PSA) blood test followed by a biopsy — with MRIs as an alternative. “An MRI was by far more accurate and a much better patient experience, and so I was like, this is it. It can work. And that’s how Ezra was born.”

THE FUTURE

Emi has big plans for Ezra going forward, and this year’s LDV Vision Summit is one step in that direction. He hopes to meet people working in the vision tech space, particularly within healthcare.

Although Ezra has been live just since January 7th of this year — 20 people were scanned in the first three weeks — its early results are very promising.

“The first person we scanned had early-stage prostate lesions. That really makes you wake up in the morning and go at it,” says Emi.

Out of the first 20 scanned, three had early-stage prostate lesions they were unaware of. Two early users came in with elevated PSA levels, but the MRIs showed no lesions, obviating the need to do prostate biopsies.

The long-term potential for Ezra — going beyond prostate screenings — is also clear.

“We helped one man who thought he was dying from cancer find out that he had no cancer but that he was likely an undiagnosed diabetic,” says Emi. “We gave him the information for his urologist and physician to make that diagnosis. We checked with him a month later and he’s over the moon happy. We helped him get peace of mind that he doesn’t have cancer, as well as diagnose a disease he wasn’t aware he had.”

Even though these results have been powered by AI and MRIs, Emi is emphatic that Ezra is not an AI company.  “We want to help people detect cancer early...and we will build any technology necessary. We think of ourselves as a healthcare company leveraging AI, not the other way around,” he says.

While Ezra’s current focus is prostate cancer, expanding to other cancers that affect women is on the horizon. After all, as Emi points out, “Women are the most preventative-focused type of individual, for themselves and their families.” To underscore that point, Emi says that many of the early adopters for prostate screenings have come at the encouragement of men’s female partners.

“The way we are approaching expansion is based on cancer incidence. We’re starting with the cancers that are most prevalent across society, prostate being one of them, along with breast, lung and stomach, and our ultimate goal is to be able to do one scan a year and find cancer in any of those organs.

“Our ultimate goal is to do a full body MRI analyzed by AI and find any and all cancer. In a decade, I hope we will have gotten there.”

If you’re building a unique visual tech company, we would love for you to join us. At LDV Capital, we’re focused on investing in deep technical people building visual technology businesses. Through our annual LDV Vision Summit and monthly community dinners, we bring together top technologists, researchers, startups, media/brand executives, creators and investors with the purpose of exploring how visual technologies leveraging computer vision, machine learning and artificial intelligence are revolutionizing how humans communicate and do business.

Current Artificial Neural Networks Are Suboptimal Says DeepMind Scientist

Viorica Patraucean, LDV Vision Summit 2018 ©Robert Wright

Early Bird tickets now available for our LDV Vision Summit 2019 - May 22 & 23 in NYC at the SVA Theatre.  80 speakers in 40 sessions discuss the cutting edge in visual tech. Register now!

Viorica Patraucean is a Research Scientist at Google DeepMind. At our Vision Summit 2018 she presented her recent work on massively parallel video nets and why it’s especially relevant for real-world low-latency, low-power applications. Previously she worked on 3D shape processing in the Machine Intelligence group of the Engineering Department in Cambridge, after completing a PhD in image processing at ENSEEIHT–INP Toulouse. Irrespective of the modality - image, 3D shape, or video - her goal has always been the same: design a system that comes closer to human perception capabilities.

Like most of you here, I'm interested in making machines see the world the way we see it. When I say machines, I'm thinking of autonomous cars or robots or systems for augmented reality. These are all very different applications, of course, but, in many cases, they have one thing in common: they require low-latency processing of the visual input. In our work, we use deep artificial neural networks, which consist of a series of layers. We feed in an image, this image is processed by each layer of the network, and then we obtain a prediction, assuming that this is an object detector, and there's a cat there. We care about cats and all.

Just to make it clear what I mean by latency – I mean the time that passes between the moment when we feed in the image and the moment when we get the prediction. Here, obviously, the latency is just the sum of the computational times of all the layers in the network.

Now, it is common practice that, if we are not quite happy with the accuracy of the system, we can make the system deeper by adding more layers. Because this increases the capacity of the network, the expressivity of the network, we get better accuracy. But this doesn't come for free, of course. This will lead to increasing the processing time and the overall latency of the system. Current object detectors run at around five frames per second, which is great, of course, but what does five frames per second mean in the real world?

I hope you can see the difference between the two videos here. On the left, you see the normal video at 25 frames per second and, on the right, you see the five frames per second video obtained by keeping every fifth frame in the video. I can tell you, on the right, the tennis ball appears in two frames, so, if your detector is not perfect, it might fail to detect it. Then you're left to play tennis without seeing the ball, which is probably not ideal.

The question then becomes, how can we do autonomous driving at five frames per second, for example? One answer could be like this, turtle power. We all move at turtle speed, but probably that's not what we are after, so then we need to get some speed from somewhere.

One option, of course, is to rely on hardware. Hardware has been getting faster and faster in the past decade. However, the faster hardware normally comes with higher energy consumption and, without a doubt, on embedded devices, this is a critical constraint. So, what would be more sustainable alternatives to get our models faster?

Let's look at what the brain does to process a visual input. There are lots of numbers there. Don't worry. I'll walk you through them. I'm just giving a comparison between a generic artificial neural network and the human brain. Let's start by looking at the number of basic units, which in the brain are called neurons, and their connections, which are called synapses.

Here, the brain is clearly superior to any model that we have so far by several orders of magnitude, and this could explain, for example, the fact that the brain is able to process so many things in parallel and to achieve such high accuracy. However, when we look at speed of basic operation, here we can see that actually our electronic devices are much faster than the brain, and the same goes for precision of computation. Here, again, the electronic devices are much more precise.


“Current systems consider the video as a collection of independent frames. And, actually, this is no wonder since the current video systems were initially designed as image models and then we just run them repeatedly on the frames of a video.”


However, as I said, speed and precision of computation normally come with high power consumption, to the point where a current GPU will consume about 10 times more than the entire human brain. Yet, with all these advantages on the side of the electronic devices, we are still running at five frames per second when the human brain can actually process more than 100 frames per second. This points to something suboptimal in how we use our hardware.

I'm going to argue here that the reason for this suboptimal behavior comes from the fact that current systems consider the video as a collection of independent frames. And, actually, this is no wonder, since current video systems were initially designed as image models that we then just run repeatedly on the frames of a video. Running them in this way means the processing is completely sequential, except for the processing that happens on the GPU, where we can parallelize things. Overall, it still remains sequential, and all the layers in the network work at the same pace, which is the opposite of what the brain does.

©Robert Wright/LDV Vision Summit

There is strong evidence that the brain exhibits a massively parallel processing mode and that neurons fire at different rates. All this because the brain rightfully considers the visual stream as a continuous stream that exhibits high correlations and redundancy across time.

Just to go back to the initial sketch, this is how our current systems work. You get an image; it goes through every layer in the network; you get a prediction. The next image comes in; it goes again through every layer; and so on. What you should observe is that, at any point in time, only one of these layers is working, and all the others are just waiting around for their turn to come.

This is clearly wasteful of resources. The other thing is that every layer works at the same pace, and, again, this is not needed if we take into account the slowness principle, which I'm trying to depict here. This principle informally states that fast-varying observations are explained by slow-varying factors.

At the top of the figure on the left are the frames of a video depicting a monkey. If you look at the pixel values in pixel space, you will see high variation, because the light changes, the camera moves a bit, or maybe the monkey moves a bit. However, if we look at more abstract features of the scene, for example the identity of the object or the position of the object, these change much more slowly.

Now, how is this relevant for artificial neural networks? It is quite well understood by now that deeper layers in an artificial neural network extract more and more abstract features. So, if we agree with the slowness principle, it means that the deeper layers can work at a slower pace than the layers at the input of the network.
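As a rough sketch of that idea (my illustration, not code from the talk), suppose each deeper layer of a four-layer network refreshes half as often as the one below it. The schedule below prints which layers would update on each incoming frame; the doubling periods are an assumed choice:

```python
# Illustrative multi-rate schedule: layer i recomputes only every
# update_period[i] incoming frames, so deeper, more abstract layers
# refresh less often, as the slowness principle suggests.
update_period = [1, 2, 4, 8]  # assumed periods; layer 0 sits at the input

for t in range(8):  # eight incoming frames
    active = [i for i, period in enumerate(update_period) if t % period == 0]
    print(f"frame {t}: update layers {active}")
```

On frame 0 every layer updates; after that, layer 0 runs on every frame while layer 3 wakes up only once every eight frames, which is exactly the "different rates" behavior described next.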

Now, if we put all these observations together, we obtain something like a Christmas tree, as shown, where all the layers work all the time but at different rates. We are pipelining operations, which generates more parallel processing, and we can now update our layers at different rates.

Initially, I said that the latency of a network is given by the sum of the computation times of all the layers in the network. Very importantly, with our design, the latency is instead given by the slowest layer in the network. In practice, we obtain up to four times faster response. I know it's not the ten times, but four is actually enough because, in perception, once you are past 16 frames per second, you are quite fine, I think.
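A back-of-the-envelope version of that latency argument, with invented per-layer timings (none of these numbers are from the talk):

```python
# Made-up compute times, in milliseconds, for a four-layer network.
layer_times_ms = [4, 6, 10, 5]

# Frame-by-frame processing: each frame traverses every layer before the
# next one enters, so the response time is the SUM of the layer times.
sequential_ms = sum(layer_times_ms)  # 25 ms -> at most 40 fps

# Pipelined processing: all layers run concurrently on different frames,
# so a new prediction comes out every MAX(layer time) milliseconds.
pipelined_ms = max(layer_times_ms)   # 10 ms -> up to 100 fps

print(f"sequential: {sequential_ms} ms per prediction")
print(f"pipelined:  {pipelined_ms} ms per prediction "
      f"({sequential_ms / pipelined_ms:.1f}x faster)")
```

With these made-up numbers the speedup is 2.5x; the up-to-4x figure quoted above depends on the real network's per-layer times.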

We obtain this faster response with 50% less computation, which I think is not negligible. And, again very importantly, we can now make our networks even deeper to improve their accuracy without affecting the latency of the network.

I hope I have convinced you that this is a more sustainable way of creating low-latency video models, and I'm looking forward to the day when our models will be able to process everything the camera can provide. I'm showing here a beautiful video captured at 1,000 frames per second. I think this is the future.

Thank you.

Watch Viorica Patraucean's keynote at our LDV Vision Summit 2018 below and check out other keynotes on our videos page.

Early Bird tickets are now available for the LDV Vision Summit May 22 & 23, 2019 in NYC to hear from other amazing visual tech researchers, entrepreneurs and investors.

We are accepting applications to our Vision Summit Entrepreneurial Computer Vision Challenge for computer vision research projects and our Startup Competition for visual technology companies with <$2M in funding. Apply now & spread the word.

LDV Capital is Looking for Summer Analysts - Come Collaborate with Us!


We at LDV Capital are currently recruiting our analyst interns for Summer 2019 in NYC.

LDV Capital is a thesis-driven early stage venture fund investing in people building visual technology businesses that leverage computer vision, machine learning and artificial intelligence to analyze visual data.

We are looking for two entrepreneurially minded, visual-tech-savvy individuals to join our team from May to August 2019:

  • An Analyst intern who has experience with startups or venture capital and is interested in learning more about venture capital, market mapping, investment research, due diligence and how to build a successful startup.

  • A Technical Analyst intern with a deep-seated interest in computer vision, entrepreneurship and venture capital, looking to learn more about venture capital, business operations, due diligence and how to run a successful startup.

We are a small team and work closely with our summer interns on many aspects of building startups and investing. Our goal is to introduce our interns to the evaluation of teams, technology and businesses, to help them connect and collaborate with entrepreneurs across the globe, and to lead them on a deep dive into sectors being disrupted by visual technologies. We work to make our internships unique in three ways:


1. We expose our interns to the leading edge visual technologies that will disrupt all industries and society in the next 20 years.

We invest in companies that are working to improve the world we live in with visual technology. As such, our horizontal thesis drives us to look at and invest in any and all industries where deep technical teams are using computer vision, machine learning and artificial intelligence to solve a critical problem.

Since our interns have a seat at the table with our deal flow, you are pushed to educate yourself on numerous industries and applications of visual tech in order to understand and evaluate the value propositions of the early-stage startups coming through the pipeline.

At LDV Capital you could be reviewing the pitch deck of a visual tech company in agriculture in the morning and sitting in on a call with a visual tech healthcare startup in the afternoon.

You will be asked to develop and present market trends, competitive landscapes, business opportunities and more for cutting edge, early stage technology companies. You will be consulted for your valuable opinions on those companies and technologies during weekly deal flow meetings.

While it is challenging work, the versatile knowledge of visual technologies and their applications across many industries that you develop over the course of the summer is applicable to almost any opportunity you pursue after your internship at LDV.

“My summer internship with LDV Capital provided a unique opportunity to interact with countless visual tech entrepreneurs while experiencing the excitement of early-stage investing. Most of all, the experience was a front row seat to the newest sensing, display, and automation trends which underpin my life goals and which will revolutionize the world as we know it.”
-
Max Henkart, Summer Analyst 2018

Max is currently exploring multiple robotics spin-outs, consulting with camera development/supplier management teams, considering full time roles in VC/CVC/Self-Driving firms, and graduating from CMU's MBA program in May 2019. ©Robert Wright

2. We empower our interns to own their own projects.

Whether you are conducting a market landscape review, investigating a unique application of computer vision, or doing a trend analysis, we want you to own it. We are here to help guide you on planning, setting milestones, creating materials and presenting your work, but we believe in "teaching you to fish."

“I really enjoyed working with the LDV team, and I learned a lot from the experience. LDV gave me a lot of responsibility, and I was able to learn what it is like to work as a venture capitalist. There is no better way to learn about entrepreneurship and venture capital.”
-
Ahmed Abdelsalam, Summer & Fall Analyst 2018

Ahmed is currently completing the final semester of his MBA at the University of Chicago, Booth School of Business. ©Robert Wright

There is no better example than our annual LDV Insights research project, where we take a deep dive into a sector or trend with prime opportunities for visual technology businesses. Our interns contribute to the project plan, conduct the research, interview experts, analyze the data, write the slides, and are named authors in the publication.

In 2017, our research found that “45 Billion Cameras by 2022 Fuel Business Opportunities” and it was published by Interesting Engineering and others.


In 2018, we identified “Nine Trends Where Visual Technologies Will Improve Healthcare by 2028” and published it on TechCrunch.

As one facet of your internship, 2019 Summer Analysts will work on our third annual LDV Insights report, covering a very exciting, rapidly growing sector of the economy. In your application, let us know what you think the sector for our 2019 Insights report will be.

“Interning for LDV was truly one of the most rewarding experiences in my career thus far. Working in the smaller environment allowed me to work closely with the GP and gain insight into the VC process. Unlike some other busy-work dominated internships, LDV provided an opportunity to own my own project, develop a research report that was ultimately published by the firm.”
-
Sadhana Shah, Summer Analyst 2017

Sadhana is currently finishing her final semester at NYU Stern School of Business, with a double major in Management and Finance and a minor in Social Entrepreneurship. After graduation, she will be joining the KPMG Innovation Lab as an Associate. ©Robert Wright

3. We provide opportunities to network with startups, technologists & other investors.

At LDV Capital, you don't get stuck behind a desk all day, every day. Our interns kick off their summer with our sixth annual LDV Vision Summit, which has about 600 attendees, 80 speakers, 40 sessions and 2 competitions over 2 days. Interns also help with and attend our LDV Annual General Partners Meeting. Your second week of the internship looks like this:

  • Monday - Help facilitate the subfinals for our Startup Competition and Entrepreneurial Computer Vision Challenge, watching the pitches of 40+ competitors and hearing the feedback and evaluation from expert judges.

  • Tuesday - Assemble aspects of our annual report and attend our Annual General Meeting for investors as well as a dinner for our investors, portfolio companies & expert network.

  • Wednesday - Attend our LDV Vision Summit, listening to keynotes about state-of-the-art visual technologies, and attend our VIP Reception for all our speakers & sponsors.

  • Thursday - Attend the second day of our LDV Vision Summit.

The rest of the summer is filled with opportunities to attend our gender-balanced LDV Community dinners, meet with startups, go to industry events, watch pitch competitions and more.

“Spending time at LDV Capital was an unforgettable experience. I’m thankful for access to A+ investors and entrepreneurs, collaborating with a world class team and a front row introduction to VC."
-
Danilo Vicioso, Summer Analyst 2018

Danilo is currently an EIR at Prehype, a corporate innovation and venture studio behind startups like Ro, ManagedByQ, BarkBox and AndCo. ©Robert Wright

Apply before Feb 28, 2019 for consideration.

If you believe you have the skills, experience and motivation to join our team and would like to gain more knowledge over the summer in:

  • Computer vision, machine learning and artificial intelligence

  • Market mapping

  • Investment research

  • Startup due diligence

  • Startup operations

  • Technical research

  • Trend analysis

  • Data analysis

  • Networking with entrepreneurs, other investors & technologists

Read carefully through everything you can find out about us online and then submit a concise application showcasing why you are a great fit for the opportunity by February 28.

We carefully consider all applications and will get back to you ASAP. Thanks!

Apply now.


Rebecca Kaden of USV Discusses Trends in Visual Tech Investing

At the LDV Vision Summit 2018, Rebecca Kaden of Union Square Ventures shared her insights on investing at the intersection of standout consumer businesses and vertical networks.

Rebecca and Evan Nisselson of LDV Capital discussed the greatest challenges they have seen their portfolio companies endure and how strategic team building is critical to success at the earliest stages of company building. Watch their chat here:

Our Sixth Annual LDV Vision Summit will be May 22 & 23, 2019 in NYC. Early bird tickets are currently on sale. Sign up for our newsletter to receive news, updates and discounts.

Thank You for Making Our 5th Annual LDV Vision Summit a Success!

Day 1 Fireside Chat with Eric Fossum, CMOS Image Sensor Inventor and Dartmouth Professor. ©Robert Wright/LDV Vision Summit 2018

Our fifth annual LDV Vision Summit was fantastic. A giant thank you to everyone who came and participated in making it another spectacular gathering.

We couldn't do it without you: YOU are why our annual LDV Vision Summit is special and a success every year. Thank you!

We are honored that you fly in from around the world each year to share insights, inspire, do deals, recruit, raise capital and help each other succeed!  

Congratulations to our competition winners:
- Startup Competition:  MoQuality, Shauvik Roy Choudhary, Co-Founder & CEO
- Entrepreneurial Computer Vision Challenge: “Flatcam”, Jesse Adams, Rice University, PhD Candidate

"If you are a startup that is trying to create a breakthrough technology, you have to be here. If you are an investor that wants to look at interesting technologies and startups, you have to be here. This is the place for computer vision." Shauvik Roy Choudhary - CEO & Co-Founder of MoQuality.

“LDV hosts an event that stands out in the sea of conferences by its focus, quality, and ability to attract exceptional entrepreneurs and investors all deeply curious and immersed in visual technology.” Rebecca Kaden - Union Square Ventures, Partner.

“I go to a lot of academic conferences and this to me feels like a really fresh, different type of conference. It’s a great mix of industry, startups, investors; that mix of people brings a totally different energy and dialogue than the academic conferences I’m used to going to. In terms of the conferences I attend in a year, this is a nice change of pace for me." Matt Uyttendale - Facebook, Director of Core AI in the AR camera group.

A special thank you to Rebecca Paoletti of CakeWorks and Serge Belongie of CornellTech as the summit would not exist without collaborating with them!

Matt Uyttendale, Facebook, Director of Core AI, Facebook Camera ©Robert Wright/LDV Vision Summit 2018

“What I like about this is it has both focus on the visual area, but a lot of different aspects and a lot of different stakeholders with very different perspectives, so you kind of get a multi-factorial view of the field.” Daniel Sodickson - NYU School of Medicine, Vice Chair Research, Dept. Radiology.

"I work quite deeply in my technical research silo in next-generation image sensor technology. The Vision Summit was a great chance to hear what the rest of the vision community is thinking about and working on, in an effective and condensed format, and to meet and discuss those interesting topics in person." Eric Fossum - Dartmouth, Professor & CMOS Image Sensor Inventor.

"The LDV summit is the definitive gathering of investors and entrepreneurs in the machine vision world.  And, thanks to Evan and his team, it's a heck of a lot of fun." Nick Rockwell - New York Times, CTO.

“The LDV Vision Summit gave an intimate and powerful look at how computer vision will change almost every sector of the economy. As a native New Yorker, I'm happy that one of my favorite events of the year brought me out of Silicon Valley and back to NYC.” Zach Barasz - BMW i Ventures, Partner.

“The summit provided us with a unique opportunity to hear from and engage with industry influencers, academic thought-leaders, investors and fellow entrepreneurs.  Getting a holistic view of emerging trends from the entire vision ecosystem is incredibly valuable and will enable us to better anticipate and navigate our road ahead.” Yashar Behzadi - Neuromation, CEO.

"I think it's a very vibrant community and the diversity of people from academia to those in the startup world and those who have already done startups and research is an interesting mix. As well as a set of investors who are dedicated to bringing these visual technologies out there to the real world." Spandana Govindgari - Co-Founder of HypeAR.

Panel Day 1: Trends in Visual Technology Investing [L to R] Polina Marinova, Fortune Magazine, Zach Barasz, BMWi Ventures, Jenny Lefcourt, Freestyle Capital ©Robert Wright/LDV Vision Summit 2018


“If you are interested in computer vision, whether you’re doing research, whether you’re building a company or investing in computer vision companies, I’d say this is the place to be! Everybody here is interested and passionate about computer vision.” Dan Ryan - VergeSense, CEO & Co-Founder.

“I came here because, as a research engineer, this is a unique opportunity to get a perspective on the whole ecosystem. You get to see startups, you get to see people who are non-technical, and you get to see students. So for me it’s very interesting to get to see the whole ecosystem and get a sense of what the needs of the community are.” Raghu Krishnamoorthi - Google, Software Engineer, Tensorflow for Mobile.

“LDV Vision Summit is a great event with thought provoking talks. Rarely do you find an event with each talk as riveting as the next - I look forward to attending the next one.” Elizabeth Mathew - Columbia University, MBA Candidate.

“If you are at all curious about the importance of computer vision, or vision in general, it’s important for you to understand how the technology works and where its applications lie, because we are moving in a direction where computer vision is going to be incredibly important in all applications of life. The LDV Vision Summit is a great conference to learn the different facets of it.” Steve Kuyan - NYU Future Labs, Managing Director.

“It was a great day, I was very impressed with the presentations and the competitions; it’s a fun combination of people from academics and industry…overall I really enjoyed it." Michael Rubinstein - Senior Research Scientist, Google.

Jason Eichenholz, Luminar Technologies, CTO & Co-Founder ©Robert Wright/LDV Vision Summit 2018

Ying Zheng, AiFi, Chief Science Officer & Co-Founder ©Robert Wright/LDV Vision Summit 2018

“Some of the technical depth that came into some of the presentations was unbelievable…it’s a diverse group of people from executives, to researchers, and everyone in between.” Anthony Sarkis, Computer Vision Researcher and Entrepreneur.

“Seeing all the latest developments in machine learning, and all of these applications, was really impressive… it’s a great mix of things you cannot find in any other classic academic conference, a combination of the business side and the science side of computer vision.” Anastasia Yendiki - Assistant Professor of Radiology, Harvard Medical School.

“Awesome energy… it brings together a lot of people from research, industry, and startups; it’s a great, fun experience with lots of networking opportunities.”  Ryan Benmalek - Ph.D. Student, Cornell University.

“This is one of the best places to learn about what is happening in the vision world, to connect AI with investors, and to think about what the solutions of the future are going to be…it's an interesting place with a broad range of actors in the space who can talk about what they are working on, and exchange ideas.” Renaud Visage - CTO & Co-Founder of Eventbrite.

Anthony Johnson, Giphy, CTO ©Robert Wright/LDV Vision Summit 2018

Fireside Chat Day 1: Rebecca Kaden, Union Square Ventures, Partner ©Robert Wright/LDV Vision Summit 2018

Assaf Glazer, Nanit, CEO ©Robert Wright/LDV Vision Summit 2018

"If you want to learn about anything in the visual space, with a technical analysis, not just high-level lay-up questions, this is where you go." Trace Cohen - Managing Director, NY Venture Partners.

"This is a very unique conference...and I go to conferences frequently. Something here in the way that it's organized, the talent that they bring and the cadence of it…this is an inspiring community for experts and for people that are interested in computer vision technology and the content around it." Assaf Glazer - CEO & Co-Founder of Nanit.

“Great event, super high quality, very focused and targeted. Most events even if they are high quality are much broader in nature, so at LDV you’re able to go very deep. I’d highly recommend this event to anyone interested in this technology.” Ophir Tanz - CEO & Co-Founder of GumGum.

“The event is inspiring. Ted Talks meets computer vision meets the future. But also it’s a family. People are here to help each other grow and that makes this unique from other events. ” Brian Brackeen - CEO & Co-Founder of Kairos.

“Vision summit is a great confluence of researchers, practitioners and visionaries talking about topics which are related but don’t often get put into the same room and probably should be more often.” Anthony Johnson, CTO of Giphy.

“If you want a set of like-minded people who are really really trying to push the boundaries in computer vision, this is the place to be.” Inderbir Sidhu, CTO TVision Insights.

“The energy at the show is fantastic. There’s such a cross-disciplinary field for [computer vision] so we’re able to talk to both the researchers and the investors who help accelerate moving the technology forward.” Jason Eichenholz, CTO & Co-Founder of Luminar Technologies.

Panel Day 2: How Will Technology Impact Our Trust in Visual Content? [L to R] Moderator: Jessi Hempel, Wired, Senior Writer; Panelists: Nick Rockwell, New York Times, CTO and Karen Wickre, KVOX Media, Founder. ©Robert Wright/LDV Vision Summit 2018

Learn more about our partners and sponsors:

Organizers:
Presented by Evan Nisselson & Abby Hunter-Syed, LDV Capital
Video Program: Rebecca Paoletti, CakeWorks, CEO
Computer Vision Program: Serge Belongie, Cornell Tech
Competitions Subfinal Judges: Rob Spectre, Brooklyn Hacker; Alexandre Winter, Netgear; Andy Parsons, WorkFrame; Jan Erik Solem, Mapillary
Universities: Cornell Tech, School of Visual Arts
Sponsors: Coatue Management, Facebook, GumGum, Adobe, ImmerVision, Neuromation, Google Cloud
Media Partners: Kaptur, VizWorld, The Exponential View
Coordinator Entrepreneurial Computer Vision Challenge: Ryan Benmalek, Cornell University, Doctor of Philosophy Candidate in Computer Science

CakeWorks is a full-service video agency and content studio that helps businesses launch better video experiences, grow viewership and increase revenue. Stay in the know with our  weekly newsletter, or follow us @cakeworksvideo #videoiscake  

Cornell Tech is a revolutionary model for graduate education that fuses technology with business and creative thinking. Cornell Tech brings together like-minded faculty, business leaders, tech entrepreneurs and students in a catalytic environment to produce visionary ideas grounded in significant needs that will reinvent the way we live.

Fireside Chat Day 2: Amol Sarva, Knotel, CEO & Co-Founder ©Robert Wright/LDV Vision Summit 2018

Keynote Day 1: Anastasia Yendiki, Harvard Medical School, Assistant Professor of Radiology ©Robert Wright/LDV Vision Summit 2018

Coatue Management, L.L.C, founded by Philippe Laffont, is a technology focused global investment manager with offices in New York, Menlo Park, San Francisco and Hong Kong.  Coatue launched in 1999 and currently has ~$16 billion in assets under management through public and private investments.

Facebook’s mission is to give people the power to share and make the world more open and connected. Achieving this requires constant innovation. Computer Vision researchers at Facebook invent new ways for computers to gain a higher level of understanding cued from the visual world around us. From creating visual sensors derived from digital images and videos that extract information about our environment, to further enabling Facebook services to automate visual tasks. We seek to create magical experiences for the people who use our products.

GumGum is an artificial intelligence company with a focus on computer vision. Our mission is to unlock the value of visual content produced daily across diverse data sets. We teach machines to see in order to solve hard problems. Since 2008, the company has applied its patented capabilities to serve a variety of industries from advertising to professional sports, with more to come.

Adobe is the global leader in digital marketing and digital media solutions. Our tools and services allow our customers to create groundbreaking digital content, deploy it across media and devices, measure and optimize it over time and achieve greater business success. We help our customers make, manage, measure and monetize their content across every channel and screen.

ImmerVision are experts in Intelligent Vision and wide-angle imagery for professional, consumer, and industrial applications.  ImmerVision technology combines advanced lens design, innovative Data-in-Picture marking, and proprietary image processing to provide an AI-ready machine vision system to OEMs, ODMs, and the global imaging ecosystem.

Neuromation is a distributed synthetic data platform for Deep Learning Applications.

The Google Cloud Startup Program is designed to help startups build and scale using Google Cloud Platform.  We are a small team with startups in our DNA. We appreciate what makes early-stage companies tick, and we think that Google Cloud’s continued success over the next decade will be fueled by great companies yet to be born.  But like you, we’re mostly here because startups are challenging and fun and we wouldn’t have it any other way.

Jesse Adams, Winner of ECVC 2018 for "Flatcam", Rice University, PhD Candidate ©Robert Wright/LDV Vision Summit 2018

Kaptur is the first magazine about the visual tech space. News, research and stats along with commentaries, industry reports and deep analysis written by industry experts.

Exponential View is one of the best newsletters about the near future of technology and society. Azeem Azhar critically examines the fast pace of change, and its deep impact on the economy, culture, and business. It has been praised by the former CEO of Reuters, founder of Wired, and the deputy editor of The Economist, among others.

LDV Capital invests in deep technical teams building visual technology businesses.

Mapillary is a community-based photomapping service that covers more than just streets, providing real-time data for cities and governments at scale. With hundreds of thousands of new photos every day, Mapillary can connect images to create an immersive ground-level view of the world for users to virtually explore and to document change over time.

The MFA Photography, Video and Related Media Department at the School of Visual Arts is the premiere program for the study of Lens and Screen Arts. This program champions multimedia integration, interdisciplinary activity, and provides ever-expanding opportunities for lens-based students.


Didn't get a chance to tell us what you are working on at the Summit? Get in touch & tell us about your recent research or your startup.


VizWorld.com covers news and the community engaged in applied visual thinking, from innovation and design thinking to technology, media and education. From the whiteboard to the latest OLED screens and HMDs, from visual UX to movie making and VR/AR/MR, VizWorld readers want to know how to put visual thinking to work.

AliKat Productions is a New York-based event management and marketing company: a one-stop shop for all event, marketing and promotional needs. We plan and execute high-profile, stylized, local, national and international events, specializing in unique, targeted solutions that are highly successful and sustainable. #AliKatProd

Robert Wright Photography clients include Bloomberg Markets, Budget Travel, Elle, Details, Entrepreneur, ESPN The Magazine, Fast Company, Fortune, Glamour, Inc. Men's Journal, Newsweek (the old one), Outside, People, New York Magazine, New York Times, Self, Stern, T&L, Time, W, Wall Street Journal, Happy Cyclist and more…

Prime Image Media works with clients large and small to produce high quality, professional video production. From underwater video to aerial drone shoots, and from one-minute web videos to full blown television pilots... if you want it produced, they can do it.

Pam Majumdar is a Virginia Beach-based social media content writer and community growth strategist for executives and brands looking to amplify their relevance and reach. Pam combines data and creativity to shape and execute original content strategies -- often from scratch.

Celebrating MoQuality's victory in the 2018 Startup Competition [L to R] Evan Nisselson of LDV Capital, Shauvik Roy Choudhary of MoQuality, Serge Belongie of Cornell Tech with August & Emilia Belongie, Abby Hunter-Syed of LDV Capital and Rebecca Paoletti of CakeWorks. ©Robert Wright/LDV Vision Summit 2018