The article below, written by our business knight Geoffrey Ejzenberg, will provide you with some insights into the solutions our technology offers.
Will the real autonomous sensor please stand up?
Granted, the article below is almost a year old... but as one can see, it is still accurate, with ever more companies in the industry - some with variable success - trying to come to market with their variation of a "solution" for an autonomous system based on LIDAR. Today there are more than 100 LIDAR companies (publicly) involved in the autonomous vehicle sensor industry, which proves the point that the hardware sensor isn't the differentiating factor. All try to provide vehicles with an understanding of the real world through the "glasses" of artificially generated laser point clouds. If it is complex to grasp a reliable, safe and accurate idea of your surroundings from real images, imagine how difficult this must be from a point cloud "image".
The writing is on the wall: no silver-bullet technology has yet emerged in the otherwise fast-moving industry of autonomous sensor systems. The author of the above article, published in WIRED, predicted a wash-out of LIDAR-based autonomous system companies in an ever more overcrowded industry. All are fighting the same battle with the same sword. Some with a mechanical unit that moves, others with a static one (solid state)... but all with LIDAR.
There is a plethora of companies out there trying their best to solve the infinitely complex task of understanding the real world vehicles operate in today by interpreting a collection of dots, generated by a laser, that would make pointillist painter Signac jealous. These laser-generated point cloud "images" recreate the contours of our world by laser-scanning the depths of our surroundings. All of this is very nifty and sweet, but how will you read a sign, or see shades, or the colour of a traffic light or a balloon...? Oh yes, did I mention that the scanning effectiveness is reduced to pretty much zero when it rains, snows, or when there is fog or dust? LIDAR systems are so good at scanning that they scan the rain, snow and fog too - great scanning, for that matter, but these weather phenomena cause noise and blur on the point cloud images to an extent comparable to your view when wearing glasses in a steam room.
So, what is the challenge all are trying to solve here? I prefer to explain it - for lack of an engineering background, or because I believe one must be able to make a 12-year-old understand even the most complex systems - as follows: "Driving a vehicle as well as, or preferably better than, the best human driver can, but without human interference". Thus, what we are really saying is: a real, fully autonomous (level 5: https://en.wikipedia.org/wiki/Self-driving_car ) vehicle should mimic the best human driving behaviour and even outperform that mimicry... actually giving the vehicle SUPER-HUMAN driving skills.
There we go, I have said it, the word is out: "SUPER-HUMAN". Any 12-year-old can understand that when you want to do something better than a human, you have to be SUPER-HUMAN. So, if we humans can see depth, contours, colours, gradings and so on with our own eyes, logic dictates that we should upgrade the ability of the human eye and install this in the to-be-automated vehicle. Today, technology exists to upgrade this human-eye perception quality so that the eyes of an autonomous system can see far beyond the visual spectrum our eyes can perceive. Going multi-spectral allows you to see in the dark, and through rain, fog and snow, as if it were a clear sunny day - on top of the full HD or 4K of all the lively colours and (e)motions the human eye can perceive during the daytime. When "hacking senses" to improve the ability of that sensory system, one shouldn't throw away what nature has developed and perfected over millions of years. No scientist can outsmart millions of years of nature's evolutionary work.
This is exactly where we at Autonomous Knight started when we developed our patented sensor system. Start from what the human eye can see and add the relevant spectral bands for the scenarios that occur in the "life" of a car, so as to give it SUPER-HUMAN vision. Needless to say, it is infinitely less complex to process and understand a REAL image than a laser-generated point cloud. The better a vehicle can "SEE", the better it will be at driving (autonomously ;-) ).
Indeed, we developed a multi-spectral sensor system. Hold on, dear critics: I hear you say that there is no time to fuse all this information together and synchronize it in time and space... and you are RIGHT. Fusing all this information from different sources is impossible in real-time autonomous operations, so let's look at the eye again and see how it solves the challenge - and that is exactly what we did. Our SUPER-HUMAN sensor system thus outperforms the leading LIDARs in all-weather resolution by up to a factor of 10, day and night... and we are not done yet - more to follow in our next article!
Obviously, we cannot tell you here what our sweet secret sauce is... but we are only a phone call away and ready to partner!
Autonomous Knight greetings, dedicated to fast-track level 5 autonomy!