A major challenge across the visual technology ecosystem is how to find, filter, search, and monetize contextually relevant content. Billions of visual content files are uploaded every day around the world by consumers and businesses to inform, entertain, and visually communicate, and that volume keeps growing.
Historically, this content was keyworded manually by humans, often via crowdsourcing platforms like Mechanical Turk, so that the images and video could be associated with relevant search terms and subsequently found and monetized.
Finally, we are seeing real advances in the technology that allows computers to see.
Computer vision, artificial intelligence, and neural networks, leveraging cloud computing infrastructure, can now analyze in real time what is happening in images and video content.
We are always looking for entrepreneurs who are solving problems in the visual technology ecosystem with a combination of deep domain expertise, the vision to solve large critical problems, and the passion to inspire other brilliant people to collaborate around building a valuable business.
Matthew Zeiler and his team have those qualities. Matthew is a passionate PhD graduate in computer vision from New York University. In 2013, he founded his company, Clarifai, after taking the top five spots at the annual ImageNet contest for software that identifies the content of images. I had the pleasure of meeting Matt in early 2014, when the company was only two people—Matt and Adam Berenzweig.
We invested in Matt, Adam, and their Clarifai vision in the summer of 2014, alongside Qualcomm Ventures, Google Ventures, Nvidia, and respected angels. Since then, they have added more impressive computer vision experts to their team, continued to iterate quickly on their technology, and attracted customers who are already leveraging their services.
Today Clarifai announced that they have raised a $10M Series A led by Union Square Ventures. We are pleased to continue investing in the Clarifai team alongside new investors Union Square Ventures, Lux Capital and Osage University Partners. Existing investors Google Ventures, Qualcomm Ventures, Nvidia Ventures, Corazon Capital, LDV Capital and New York University also participated.
Clarifai makes sense of visual data via deep learning. Their cloud platform leverages artificial intelligence and computer vision to understand the visual content in images and video. Their technology automatically associates metadata with images and video so the content can be more easily categorized, filtered, searched, monetized, and distributed where contextually relevant.
Upload images or connect to the Clarifai API and their system will deliver keywords within seconds. ©Clarifai
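To make the workflow concrete, here is a minimal sketch of what calling an image-tagging service like Clarifai's could look like from a developer's perspective. The endpoint URL, request schema, and response shape below are hypothetical assumptions for illustration only, not the actual Clarifai API contract.

```python
# Hypothetical sketch of an image-tagging API call.
# TAG_ENDPOINT and the request/response schemas are assumptions,
# not the real Clarifai API.
TAG_ENDPOINT = "https://api.example-vision.com/v1/tag"  # hypothetical URL

def build_tag_request(image_url, api_key):
    """Assemble the pieces of an HTTP tagging request (hypothetical schema).

    The returned dict could be passed to an HTTP client such as
    requests.post(**req) to submit the image for analysis.
    """
    return {
        "url": TAG_ENDPOINT,
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"image_url": image_url},
    }

def top_keywords(response_json, limit=5):
    """Extract the highest-confidence keywords from a hypothetical
    JSON response of the form {"tags": [{"name": ..., "confidence": ...}]}."""
    tags = sorted(response_json["tags"], key=lambda t: t["confidence"], reverse=True)
    return [t["name"] for t in tags[:limit]]
```

In this sketch, a platform would submit an image URL and receive ranked keywords back within seconds, which is the core loop the caption above describes.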
Television stations, media publishers, stock photography agencies, advertisers, brands, video and photo platforms, satellite image archives, medical imaging providers, and anyone who creates and manages visual content will benefit from the Clarifai solution.
Content creators, brands, advertisers, and publishers will be able to add contextually relevant advertisements alongside the exact frame in a video that matches their branding needs. For example, Coca-Cola could advertise their products only next to visual content showing a multi-racial family driving a BMW with the roof down along the coast while laughing at the youngest child's jokes.
Upload videos or connect to the Clarifai API and their system will deliver keywords within seconds. ©Clarifai
Prior to Clarifai, finding that exact frame manually across the internet was harder than finding a needle in a haystack. Finding all of the frames that match a desired group of keyword signals across billions of hours of video footage would not be efficient, accurate, or affordable for any platform without leveraging computer vision and artificial intelligence.