LDV Vision Summit 2018: Agenda

 

Visual Technologies leveraging computer vision, machine learning and artificial intelligence are revolutionizing how humans communicate and do business.  

New innovations are emerging every month and our visual technology market is poised for exponential growth. As a result, companies are struggling to leverage the right solutions to help them adapt and thrive in the new world.

Join us for an interactive two-day summit to discuss trends and technologies. Meet the world’s brightest technology innovators – and learn firsthand how their visions will transform visual communication and boost or disrupt your businesses.

 

Who should attend?

» Technology executives evaluating imaging and video companies for partnerships & acquisitions
» Media and brand executives interested in boosting revenue by leveraging new imaging and video products and technologies
» Investors: Visual technologies & businesses deliver tremendous upside - don't miss the next Instagram, YouTube, Oculus, Snapchat...
» Imaging and video startups interested in meeting investors, customers, recruiting and potential partners
» Creatives: Photographers, Videographers and anyone creating content
» Computer Vision & Artificial Intelligence experts and professionals

* The program is updated often and session times will be added closer to our Summit.

 

Day 1:  May 23, 2018 (8am-7pm)

Technology Deep Dive

Location: SVA Theatre, NYC

8:00 - 8:50am
Registration and Coffee




Keynote: Enabling Persistent Augmented Reality Experiences on Mobile Devices
Persistent Augmented Reality experiences are connected to, and persist relative to, places or things in the real world. Enabling these AR experiences across the spectrum of mobile devices is one of Krishnan’s missions at Facebook. There are tremendous challenges in delivering high-fidelity AR experiences to the widest array of mobile devices: understanding the geometry of a scene, overlaying and animating 3D objects, and enabling users to explore. He will share how Facebook is solving this and giving creators the ability to author these experiences on its platform. He will showcase specific examples from movie partnerships and also highlight future challenges and opportunities for mobile augmented reality to succeed.
-Krishnan Ramnath, Facebook, Mobile AR Tech Lead


Keynote: Camera Media May Be Bigger Than TV
Camera media such as filters, lenses and other augmented reality effects will power augmented reality experiences around the world. Marketers and brands are exploring how to reach their audiences in this new medium. Allison will share her insights on how camera marketing programs have evolved over the years, where brands are showing up in the camera today and where they aim to be tomorrow.
-Allison Wood, CameraIQ, CEO & Founder


Panel: Delivering Augmented Reality Beyond Dancing Hot Dogs
The digital and physical world will converge to deliver the future of augmented reality. The technology stack for augmented and mixed reality will require several new layers of technologies to work well together, from spatial computing with hyper-precision accuracy in multiple dimensions to filtering contextually relevant data to be displayed to the user based on location, activity, or time of day. What are the technical challenges and opportunities in delivering the AR Cloud?
Moderator: Joshua Brustein, Bloomberg Businessweek, Correspondent
Panel: 
- Serge Belongie, Cornell Tech, Professor of Computer Vision
- Ryan Measel, Fantasmo, CTO
- TBA


Keynote
- TBA


Keynote
- TBA


Break: Networking & Coffee



Keynote: Measuring Audience Attention Via Computer Vision & Deep Learning
The currency of advertising is based on 30-year-old technology. Inderbir will share how TVision is collecting and annotating video to train its deep-learning-based computer vision models for detecting, tracking and recognizing viewers and measuring their attention levels. He will discuss the challenges and his approach to applying the latest computer vision technologies to real-world business questions today, and how the technology will impact the future of the advertising industry.
- Inderbir Sidhu, TVision Insights, CTO


Keynote: Retail Brain Using All Synthetic Data And Going After Amazon Go
Acquiring high-quality data with accurate labels is time-consuming, expensive, and often not possible. Ying will share how AiFi leverages synthetic data to create checkout-free solutions at massive scale. They create large-scale store simulations, render high-quality images with pixel-perfect labels, and use them to successfully train deep learning models for camera calibration, multi-camera multi-person tracking, re-identification, and product recognition.
- Ying Zheng, AiFi, Chief Science Officer & Co-Founder


Keynote: Synthetic Data Will Help Computer Vision Make the Jump to the Real World
Modern deep learning and artificial intelligence are data-hungry. Josh will share how computer vision can bridge the gap in applications where data collection is expensive. He will make the case for synthetic data and describe how OpenAI uses it to learn perception models for real-world robotic control. Josh is a PhD student in computer science at the University of California, Berkeley and has a BA in Mathematics from Columbia University.
- Josh Tobin, OpenAI, Research Scientist


Keynote: Racing To Build Safe, Reliable and Scalable LiDAR Technology For Autonomous Vehicles & Beyond
Autonomous vehicles are able to "see" by leveraging a number of vision technologies to create an accurate, real-time, 3D representation of their environment. Jason will explain how Luminar has developed a scalable, long-range LiDAR system powerful enough to enable safe and ubiquitous autonomous driving. He will share behind-the-scenes stories of building Luminar, their unique sensor design and the future challenges for the industry. Jason is a serial entrepreneur and a pioneer in laser, optics and photonics product development and commercialization.
- Jason Eichenholz, Luminar Technologies, CTO & Co-Founder


Panel: Advancements Of Deep Learning And Edge Computing Will Power The Internet Of Eyes
Quantizing and shrinking deep learning models to run on-device, along with exciting new optical chip technology, will empower the next generation of high-performance computing tasks on the edge. The Internet of Eyes empowers all inanimate objects to see. Smartphones, wearables and other devices will leverage these new advances to process data locally without sending it to the cloud. These chips will need to be fast, with low latency and low power consumption not possible on traditional electronics. Where are the challenges, opportunities and real-life use cases for these technological advances?
Moderator: Jackie Snow, MIT Technology Review, Journalist & Associate Editor, Artificial Intelligence
Panel: 
- Yichen Shen, Lightelligence, CEO & Co-Founder
- Raghu Krishnamoorthi, Google, Software Engineer, TensorFlow for Mobile


Lunch



Fireside Chat: Rebecca Kaden of Union Square Ventures and Evan Nisselson of LDV Capital
Rebecca and Evan will discuss present and future investment opportunities leveraging visual technologies.
- Rebecca Kaden, Union Square Ventures, Partner
- Evan Nisselson, LDV Capital, General Partner



Break: Networking and the bar is open


Keynote: Converting Hand-Drawn Wireframes to Prototypes with Deep Learning
Artificial Intelligence is impacting all domains including the creative industry. Tony will share how Uizard is leveraging Computer Vision and Machine Learning to instantly transform any wireframe into code and revolutionize the way people build websites and apps. Tony specialized in Machine Learning during his graduate studies at the IT University of Copenhagen and ETH Zurich.
- Tony Beltramelli, Uizard, CEO & Co-Founder


Keynote: Can Brain Imaging with MRI Transform the Diagnosis of Mental Illness?
Psychiatric and neurological conditions are accompanied by changes to the wiring of the brain, i.e., to the intricate network of fiber pathways that facilitate the exchange of information between brain regions. Until recently, these pathways could only be studied post mortem by anatomists, but advances in MRI technology over the last couple of decades are now enabling us to map the wiring of the brain in living human beings. Anastasia will discuss her work on developing image analysis algorithms to map brain pathways based on MRI scans, as well as the open problems and exciting opportunities in this area. She will also share the promise that these techniques hold for characterizing mental illness based on a rich set of brain measures, with the hope of achieving earlier and more accurate diagnosis.
- Anastasia Yendiki, Harvard Medical School, Assistant Professor of Radiology


Keynote: Artificial Intelligence May Reduce Diagnostic Errors In The Trauma Setting
Entrepreneurs and scientists around the world are leveraging Artificial Intelligence to improve healthcare. Imagen is working on clinical decision support systems driven by AI. Sumit will share the unique tech, challenges, and future opportunities to leverage AI to improve the standard of care for everyone. Prior to joining Imagen he was a research scientist at Facebook AI Research (FAIR) and AT&T Labs Research. He graduated with a Ph.D. in computer science from New York University under the supervision of Prof. Yann LeCun.
- Sumit Chopra, Imagen Technologies, Head of Machine Intelligence


Keynote: Massively Parallel Video Nets Power Robotic Sight
Robotics, autonomous vehicles and many other domains need to process exponentially growing amounts of video under strict power and latency constraints. This is critical for use cases that need to make decisions in real time on limited hardware. Viorica will enlighten us with her recent work on massively parallel video nets and how it is especially relevant for real-world low-latency, low-power applications. Previously she worked on 3D shape processing in the Machine Intelligence group of the Engineering Department in Cambridge, after completing a PhD in image processing at ENSEEIHT–INP Toulouse. Irrespective of the modality -- image, 3D shape, or video -- her goal has always been the same: design a system that comes closer to human perception capabilities.
- Viorica Patraucean, Google DeepMind, Research Scientist


Keynote: Teaching Machines to Perceive the World Like Humans
Twenty Billion Neurons' mission is to instill common sense into computers through video and to turn inanimate devices into human eyes that can understand the world around them, assist us, and ensure a safe and healthy lifestyle. Roland will share technical details, challenges and opportunities of building a large and diverse video dataset to train computers to see like humans. He will give examples of applications that leverage their technology today and how their technology will impact society in 10 years. Roland is also a Professor at the University of Montreal and received his PhD in Computer Science from the University of Toronto.
- Roland Memisevic, Twenty Billion Neurons, CEO, Chief Scientist & Co-Founder


Keynote: Automatically Isolating Individual Voices In Noisy Videos
Videos frequently have people speaking simultaneously over each other or speaking in noisy rooms, which makes it difficult to hear individual speakers. Michael will inspire with his latest work leveraging both audio and video to isolate a single speech signal from a mixture of sounds in noisy environments. Valuable use cases for this solution include video enhancement, speech recognition, captioning, video conferencing, and applications across the Internet of Eyes such as smart assistants. Michael is a Senior Research Scientist at Google, received his PhD from MIT under the supervision of Bill Freeman and spent a year as a postdoc at Microsoft Research New England.
- Michael Rubinstein, Google, Research Scientist


Closing: Congratulate Entrepreneur Computer Vision Challenge Winner & Thanks


Drinks & Networking

 

Day 2: May 24, 2018 (8am-7pm)

Business and Products

Location: SVA Theatre, NYC

8:00 - 8:50am
Registration and Coffee



Keynote: The Rebirth of Medical Imaging
Your next MRI scan will be less like a snapshot and more like a symphony. After more than 20 years in the field of biomedical imaging research, Dan still finds himself marveling at this marriage of physics and medicine. He will share how, with the help of artificial intelligence as well as cheap sensors, not merely medical imaging data but medical imaging devices themselves will be transformed, moving from emulating the eye to emulating the brain, and opening up new worlds of information with which to understand ourselves and rid ourselves of disease.    
- Daniel Sodickson, NYU School of Medicine, Vice Chair of Research in the Department of Radiology


Keynote: Advancing Precision Medicine Through Artificial Intelligence and Facial Analysis
With a growing database of over 10,000 diseases, FDNA's phenotyping technologies capture, structure, and interpret complex physiological information, including facial analysis, in one platform. They partner with a global network of clinicians, labs, and researchers to significantly shorten the diagnostic process and advance precision medicine for patients. Moti co-founded several companies, including Face.com, which was sold to Facebook in 2012. He will enlighten us on how facial recognition technologies have evolved over the past 10 years, how FDNA is leveraging facial recognition today and how he foresees the future of precision medicine around the world.
- Moti Shniberg, FDNA, Chairman & Co-Founder


Keynote: Nanochemistry Delivers UV Sensors for Precision Medicine
The goal of precision medicine is to drive care and advice through a combination of individual genetic risk with actual environmental and behavioral data. Today, the rapidly expanding genomic dataset is not being matched by behavioral and environmental data. Emmanuel will share how their nanochemistry technology can uniquely analyze UV exposure. Their first product, Shade, is the first clinical-grade UV monitor designed to be a personal wearable device. With clinical trials underway and nearly 2 million UV measurements logged across its users, the company's mission is to deliver the first wearable measurement that is clinically validated and clinically helpful. Emmanuel has a Ph.D. in Biophysics from Columbia University and a BS in Mathematics and Physics from Mines ParisTech.
- Emmanuel Dumont, Shade, CEO & Founder


Keynote: Using Sleep Science and Computer Vision Technologies to Improve Your Child's Sleep
Sleep is a major part of our lives, but historically technology has not been a part of this activity. Assaf will share how Nanit's unique technology and product vision help children and their parents sleep better. Merging advanced computer vision with proven sleep science, Nanit provides in-depth data to help babies, and parents, sleep well in those crucial early months and years of a baby’s development. This technology is expandable to the greater population as well, as tracking and understanding sleep patterns and anomalies can lead to early detection of other disease states like sleep apnea, seizures, autism and more. He will share the state of the art today and how he envisions sleep tech helping society in 10 and 20 years.
- Assaf Glazer, Nanit, CEO


Panel: How Will Technology Impact Our Trust In Visual Content?
Fake news, fake images and fake videos are exponentially increasing.  Technology is making it easier and easier to manipulate and distribute visual content. We read and experience visual content across many different sources from Google News, to Twitter, Wired, Facebook, The Washington Post and The New York Times. How is technology impacting our trust in visual content today, tomorrow and in 20 years? 
Moderator: Jessi Hempel, Wired, Senior Writer
Panelists:
- Karen Wickre, KVOX Media, Founder
- Nick Rockwell, New York Times, CTO


Break: Networking & Coffee





Keynote: Synthetic Data And A Token-Based Marketplace For AI Model Development
Neuromation aims to democratize artificial intelligence through the use of synthetic data and distributed computing to dramatically reduce the cost of development. Through its token-based global marketplace, Neuromation connects AI talent, data providers, and customers to enable the development of novel AI solutions. Yashar will share how their marketplace and token function, and how synthetic data will impact our industry today and in the future.
- Yashar Behzadi, Neuromation, CEO


Keynote
- TBA


Panel: How Are Brands And Advertisers Benefiting From Computer Vision And Artificial Intelligence?
It is hard to parse the signal from the noise in our industry about how computer vision and artificial intelligence are empowering brands and marketers. Our panelists will share the challenges and opportunities of these technologies impacting advertising and marketing, from real use cases today to future impact in 5, 10 and 20 years.
Moderator: TBA
Panelists:  
- Ophir Tanz, GumGum, CEO
- TBA


Lunch


Fireside Chat
- TBA




Break: Networking and the bar is open


Panel: Virtual Reality Programming Is Here To Stay. Where and How Is It Working?
Not just for gamers, and not a scene from Ready Player One, the latest VR initiatives have even impressed William Gibson, the sci-fi-as-reality visionary himself. NBC, with Intel True VR, offered more than 50 hours of Winter Olympic coverage from PyeongChang. But aside from the thrill of channeling your inner speed skater or bobsledder, how is VR impacting our media landscape? It’s popular, it’s expensive, and it’s becoming prevalent – but can it be sustained?
Moderator: Rebecca Paoletti, CakeWorks, CEO & Co-Founder
Panelists:
- Allison Stern, Tubular Labs, CMO & Co-Founder
- Russell Quy, B Live, Founder & CEO


Keynote
- TBA


Keynote
- TBA


Panel: From Traditional Broadcasters to Amazon, the OTT Game is Heating Up
Will Consumers Pay, for How Long, and Who Will Win? In a dynamic world of cord-cutters and cord-neverers, and with the shifting of TV ad spend to digital (and the duopoly of Google and Facebook owning the bulk of these digital ad dollars), a new media model has emerged: direct-to-consumer. How hard is it to “go OTT” these days, from platforms to programming to measurement? What does it mean for traditional TV models and viewing habits? And does the consumer, with limitless choice (and without a steep cable bill), ultimately win?
Moderator: Rebecca Paoletti, CakeWorks, CEO & Co-Founder
Panelists:
- Daniel Ahiakpor, Tout, VP Strategic Partnerships
- Chris Bassolino, Zype, COO & Co-Founder
- Marty Roberts, Wicket Labs, CEO & Co-Founder


Closing: Congratulate Startup Competition Winner & Thanks


Drinks & Networking