AutoX is Democratizing Autonomous Driving with a Camera-First Solution

©Robert Wright/LDV Vision Summit

Leaving his role as founding director of Princeton's Computer Vision and Robotics Lab, Jianxiong Xiao (Professor X) founded AutoX. He spoke at our LDV Vision Summit 2017 about how he is working to lower the price of entry into the autonomous driving field with an innovative camera-first solution.

Early Bird tickets are now available until March 25 for the LDV Vision Summit 2018 to hear from other amazing visual tech researchers, entrepreneurs and investors.

Today I'm going to talk about AutoX. We're a company working on self-driving cars. Why self-driving cars? If you look at the tech revolutions of the past few decades, we have personal computers, we have the internet, we have smartphones. This tech revolution has already changed everyone's life. It's not just a fancy tool for scientists; it actually changed everyone's life.

If you think about the future, many things are going to happen. But if you think about what the major difference will be 30 years from now, one of the biggest changes is probably that all cars will be able to drive themselves. That's what makes me so excited about self-driving cars. Transportation also plays a huge role in human society, so I see this as one of the biggest applications ever for my expertise in computer vision and robotics.

AutoX is a company focused on self-driving technology with the mission to democratize autonomy.

What does that mean? Here we draw an analogy with computer technology. A few decades ago, yes, we did have computers, but each computer was huge and, what's more, extremely expensive. With a million-dollar computer in a huge server room, only a very small number of people in the world, top scientists and top researchers, had access to computation. The technology at that time was amazing, but its impact on society was very, very limited.

Now think about life today. Everyone has a $500 smartphone. This is what truly excites me about technology: creating universal impact for everyone.

If you think about self-driving car technology today, it's pretty similar. Each self-driving car costs $1,000,000 or even more. It's much more expensive than hiring a few people just to drive for you. So self-driving car technology, at this stage, does not make much sense to the general public.


We believe self-driving cars should not be a luxury; they should be universally accessible to everyone.


At AutoX, our mission is to democratize autonomy: to make self-driving cars affordable and at the same time technically robust enough for every citizen to use. We believe self-driving cars should not be a luxury; they should be universally accessible to everyone.

If you think about self-driving cars, why are they so expensive? Here is a picture of the Baidu self-driving car. Each car costs about $0.8 million USD. Most of the cost comes from the sensors: a high-end differential GPS, a high-end IMU, and this monster, the LIDAR. The LIDAR on top is the Velodyne 64-beam unit, which costs about $80,000 USD these days.

Putting aside the cost of the LIDAR, if you look at the LIDAR data, I would say the autonomous driving industry has a blind faith in LIDAR. For one thing, LIDAR has very, very low resolution.

Here is a simple question for you: does this LIDAR point cloud represent a pedestrian or not? Look here. Everyone here has human intelligence, and you may see that, okay, maybe this is a pedestrian. But how about this one? Is this a pedestrian, or is it a Christmas tree? In fact, both of them are pedestrians.

A pedestrian viewed at low resolution can probably still be recognized, but if you want to drive your car safely, you need to recognize much subtler detail, like, for example, the curb of the road. If the car cannot recognize the curb, it is going to drive onto the sidewalk, which endangers pedestrians. So I would say that high resolution really matters. High resolution enables detailed analysis of complex scenes, which is required for level 5 autonomous driving.
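To put the resolution gap in perspective, here is a back-of-envelope comparison. The figures are typical published specs for a 64-beam spinning LIDAR and an ordinary HD camera, chosen for illustration; they are not AutoX's numbers:

```python
# Back-of-envelope: measurement density per frame of a 64-beam spinning
# LIDAR vs. a single HD camera. Illustrative figures only.

lidar_beams = 64             # vertical channels (e.g. a Velodyne 64-beam unit)
lidar_horiz_steps = 2000     # points per beam per revolution (~0.18 deg)
lidar_points_per_frame = lidar_beams * lidar_horiz_steps   # 128,000 points

cam_w, cam_h = 1920, 1080    # one ordinary HD camera
cam_pixels_per_frame = cam_w * cam_h                       # 2,073,600 pixels

ratio = cam_pixels_per_frame / lidar_points_per_frame
print(f"LIDAR points per frame:  {lidar_points_per_frame:,}")
print(f"Camera pixels per frame: {cam_pixels_per_frame:,}")
print(f"Camera makes roughly {ratio:.0f}x more measurements per frame")
```

Even one HD camera samples the scene about an order of magnitude more densely than the LIDAR, which is the sense in which fine structure like curbs and lane markings is easier to resolve in images.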

©Robert Wright/LDV Vision Summit

The other drawback of LIDAR is that it only captures the 3D shell of an object. But many of the most complex situations in the world are actually conveyed by appearance rather than 3D shape, such as road markings, traffic signs, curbs, traffic lights, and so on. At AutoX, we focus on a Camera-First Solution. We're not against any sensor, but we focus on the video camera as our primary sensor, because it captures most of the information necessary for very safe autonomous driving.

We're a company building Full-Stack Software for autonomous driving, which includes perception, understanding how dynamic objects move, and making decisions and planning how the car should drive. The last step of our Full-Stack Software is vehicle control: executing the plan the car has laid out.
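The stages just described, perception, prediction, decision making and planning, and control, can be sketched as a toy pipeline. Everything here is a hypothetical illustration with stub logic; none of the names or behavior come from AutoX's actual software:

```python
# Toy full-stack driving pipeline: perception -> prediction -> planning
# -> control. All names and logic are illustrative stubs.

def perceive(frame):
    # Camera-first perception: in reality a deep network; here a stub
    # returning detected objects with positions/velocities in the
    # vehicle frame (meters, meters/second).
    return [{"label": "pedestrian", "pos": (12.0, 1.5), "vel": (0.0, -1.0)}]

def predict(detections, horizon_s=1.0):
    # Constant-velocity prediction of each dynamic object's future position.
    return [
        {**d, "future_pos": (d["pos"][0] + d["vel"][0] * horizon_s,
                             d["pos"][1] + d["vel"][1] * horizon_s)}
        for d in detections
    ]

def plan(predictions, cruise_speed=10.0):
    # Decision making: yield if any predicted object ends up near our path.
    for p in predictions:
        x, y = p["future_pos"]
        if 0 < x < 20 and abs(y) < 2.0:
            return {"speed": 3.0, "steer": 0.0}   # slow down and yield
    return {"speed": cruise_speed, "steer": 0.0}  # keep cruising

def control(command):
    # Vehicle control: map the planned speed/steering to actuator commands.
    return {"throttle": command["speed"] / 10.0, "steer": command["steer"]}

command = plan(predict(perceive(frame=None)))
actuation = control(command)
print(actuation)  # the pedestrian is predicted near the path, so the car yields
```

The point of the sketch is the data flow: each stage consumes the previous stage's output, and the final control step is the only one that touches the vehicle's actuators.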

We're a very young company; we were founded in September 2016, and in the past eight months we have made tremendous progress. Our company is based in San Jose, California, an area big enough for a lot of autonomous driving testing.


We are accepting applications to our Vision Summit Entrepreneurial Computer Vision Challenge for computer vision research projects and our Startup Competition for visual technology companies with <$2M in funding. Apply now &/or spread the word.


Here's a demonstration where we're using a purely camera-based system, with no LIDAR, no radar, no ultrasonic sensors, and no differential GPS, to drive the vehicle. On the top left, our car is driving in a dense urban scenario with a lot of traffic, making turns and so on. On the bottom left, our car is driving on a curvy road, making a lot of sharp turns, to demonstrate that our perception system can recognize the road in fine detail and in real time.

On the right, we're showing some video taken at nighttime. Using the camera, it is still possible to drive at night, which demonstrates the power of this video-based approach. And may I mention, in this demo we're using only cameras, with GPS as the only other sensor. We're not using any other sensors, but in production cars we are open to integrating other sensors as well. The point of this demo is to demonstrate the power of the camera, because personally I believe it is mostly ignored or underappreciated by the autonomous driving industry.

In the past eight months, we have built something very, very small, but very good, to carry out this mission. And we're very excited to continue along this path to make self-driving technology a reality.

©Robert Wright/LDV Vision Summit

Here is another video demonstrating our camera-based system driving in a different scenario. As you know, in California it is actually very difficult to find bad weather. So in the past two months we finally got days when it actually rained, and we were so excited that we brought out the car to take a video like this. You can see that our camera-based system actually drives quite well in heavy rain, and here our car is driving in a residential neighborhood. There are no lane markings on the road, which also makes it particularly challenging: recognizing the road automatically is very hard without them.

Here is another video from a rainy day, where our car is going under a bridge. The lighting becomes very dark and then very bright again, but we demonstrate that the camera-based system can still work. Some of you can probably recognize where we're driving in this test demo: the logo here says the City of Cupertino.

As I mentioned, we're a very, very young company, and at this very early stage we're still demonstrating the potential of this camera-based system.

Watch the video: