Friday 12th August 2022

3D Imaging And LiDAR – Poised To Dominate Autonomy And Perception


We live in a world full of data and imagery. Since the invention of the camera in the late 1800s, applications have proliferated across entertainment, consumer, space, and medicine. The arrival of the video camera in the early 1900s continued this revolution, which was further accelerated by significant progress in supporting technologies like semiconductors, computing, image processing, machine learning and artificial intelligence. Typically, these efforts focused on 2D renditions of images and data.

3D imaging started with specialized applications like magnetic resonance imaging (MRI) in the 1980s, outer space-based LiDARs (1993) and dental imaging (1995). Since then, it has been maturing and gaining significant traction in diversified applications. The data can be generated with various active or passive techniques. Active techniques involve transmitting electromagnetic (X-ray, radio, optical) or acoustic (sonar, ultrasonic) waves onto the object of interest, then detecting and analyzing the return energy (amplitude, frequency, etc.). The time or phase difference between the transmitted and received signals provides the depth dimension. Passive techniques such as stereo cameras (imaging the same object from two different spatial perspectives) can also be used to generate the required 3D data. Finally, 3D information can also be extracted from monovision cameras through a combination of machine learning and signal processing techniques, although this is generally inferior in fidelity and compute speed relative to direct 3D imaging and measurement.
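To make the time-of-flight principle concrete, here is a minimal sketch in Python of how a measured round-trip delay converts to range; the delay value is an illustrative number, not data from the article:

```python
# Minimal time-of-flight range calculation (illustrative values, not from the article).
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(delay_s: float) -> float:
    """Range to a target given the round-trip delay of a LiDAR/radar pulse."""
    # The pulse travels to the target and back, so halve the total path length.
    return C * delay_s / 2.0

if __name__ == "__main__":
    delay = 667e-9  # ~667 ns round trip
    print(f"Target range: {range_from_round_trip(delay):.1f} m")  # ~100 m
```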

LiDAR is one of the most discussed and deployed 3D imaging techniques for AoT™ (Autonomy of Things) applications, which include autonomous vehicles (AVs), Advanced Driver Assistance Systems (ADAS), autonomous trucking, construction, mining, surgery, smart cities and smart infrastructure. Velodyne pioneered the use of surround-view LiDARs for AVs during the DARPA Grand Challenge in 2008. In the decade since, LiDAR has attained “must have” status with a majority of automotive OEMs for ADAS, and with AV driving-stack companies for localization, mapping and Level 4 autonomous driving. Tesla (TSLA) and some others believe that LiDAR is not required for ADAS and AVs – their approach is to use monovision cameras to extract 3D information through artificial intelligence and machine learning techniques. While intriguing, such approaches are in the minority and have yet to be validated in real-life environments.
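As one illustration of the camera-only philosophy, the sketch below runs an off-the-shelf monocular depth network (MiDaS, loaded via PyTorch Hub) on a single image. The model choice and image path are assumptions made purely for illustration – this is not Tesla's or anyone else's production method – and the output is relative (unscaled) depth rather than metric 3D:

```python
import numpy as np
import torch
from PIL import Image

# Hedged sketch: estimate relative depth from a single RGB image with an
# off-the-shelf monocular depth network (MiDaS). Requires an internet
# connection the first time to download the model weights.
model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()

transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = np.array(Image.open("street_scene.jpg").convert("RGB"))  # hypothetical image
batch = transform(img)

with torch.no_grad():
    prediction = model(batch)  # relative (unscaled) inverse-depth map

depth = prediction.squeeze().cpu().numpy()
print("Relative depth map shape:", depth.shape)
```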

Early implementations of 3D imaging relied on classical 2D image processing methods. This is not efficient from a compute perspective and filters out significant amounts of useful data. In recent times, the amount of research devoted to 3D vision and image processing has accelerated. At the premier global conference on imaging (the IEEE Conference on Computer Vision and Pattern Recognition, or CVPR) in June 2021, 3D computer vision dominated among 25 topic categories, with 44 presentations (out of a total of ~200).


LiDAR point clouds are not intuitive for humans to visualize and need processing before computers can act on them. As the technology and applications mature, software companies specializing in processing LiDAR data are emerging as critical partners for LiDAR companies. They help unleash the true power and market potential of 3D imaging data for safety and productivity applications. Seoul Robotics is one such company – a team of 40 software and algorithm specialists based in Seoul, South Korea, that works with a number of LiDAR companies to integrate software that turns raw LiDAR point cloud data into application-specific information. The software is agnostic to the actual LiDAR architecture and technology. According to Han Bin Lee, CEO of Seoul Robotics: “3D image processing requires fundamentally different techniques since voxels (3D data element) represent an order of magnitude more information (a cube vs a rectangle) and costs for annotating this data manually is very expensive”. Figure 2 compares 2D and 3D imaging in typical automotive scenarios:
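To give a feel for the voxel representation Lee describes, here is a minimal sketch that bins a point cloud into occupied voxels. The 0.2 m voxel size and the synthetic points are assumptions for illustration; this is not Seoul Robotics' actual pipeline:

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.2) -> np.ndarray:
    """Return the unique integer (i, j, k) indices of voxels occupied by an N x 3 point cloud."""
    indices = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(indices, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(-10.0, 10.0, size=(100_000, 3))  # synthetic x, y, z points in meters
    occupied = voxelize(cloud)
    print(f"{len(cloud)} points -> {len(occupied)} occupied voxels")
```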

As seen in Figure 2, data is structured in different ways for 2D and 3D imaging. Given the costs of human data annotation and labeling for machine learning, Seoul Robotics has built auto-labeling capabilities into its object libraries and algorithms. Apart from the automotive case, Seoul Robotics is also engaged in a factory automation and logistics project in collaboration with a major automotive OEM. Their software integrates 3D imagery from hundreds of short- and long-range LiDARs from different suppliers in order to automate the movement of thousands of vehicles and trucks in a factory environment. The system achieves this with infrastructure-based 3D perception connected to a 5G network, and it is the first of its kind to be deployed at large commercial scale. 2D cameras were used initially but produced an unacceptable number of false positives, degrading system efficiency. Simple, single-beam LiDARs were also deployed but did not provide adequate safety margins and performance. Stereo cameras were severely limited in range. A solution using a combination of high point-density short- and long-range LiDARs, knitted together by Seoul Robotics’ software, has overcome all these issues. The system is actively being co-developed with other technology providers, with planned implementations at other factory sites. Han Bin Lee: “We are looking forward to this significant implementation of 3D Vision Technology and expect it to provide massive automation benefits at a high level of safety. The experience gained in a project of this scale will be invaluable for other smart city and smart infrastructure applications”.
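One basic building block of infrastructure-based perception of this kind is registering point clouds from many fixed sensors into a single world frame. The sketch below assumes each LiDAR's mounting pose is known as a 4x4 rigid-body transform; the sensor names, poses, and data are invented for illustration and are not the deployed system's configuration:

```python
import numpy as np

def to_world(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid-body pose (sensor frame -> world frame) to an N x 3 point cloud."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ pose.T)[:, :3]

def merge_clouds(clouds: dict[str, np.ndarray], poses: dict[str, np.ndarray]) -> np.ndarray:
    """Concatenate all sensor clouds after transforming each into the world frame."""
    return np.vstack([to_world(clouds[name], poses[name]) for name in clouds])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two hypothetical infrastructure-mounted LiDARs, the second offset 50 m along x.
    poses = {
        "gate_north": np.eye(4),
        "gate_south": np.block([[np.eye(3), np.array([[50.0], [0.0], [0.0]])],
                                [np.zeros((1, 3)), np.ones((1, 1))]]),
    }
    clouds = {name: rng.uniform(-5, 5, size=(10_000, 3)) for name in poses}
    world = merge_clouds(clouds, poses)
    print("Merged cloud shape:", world.shape)
```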


Factory automation is exciting – but how about space debris mapping? This sounds a bit, well, spacey, but it is a real problem considering that we have been sending people and equipment into space since the 1960s. There are an estimated ~1M pieces of human-created debris in space, ranging in size from 1 cm to several meters. These continue to multiply as collisions occur, creating what is known as the Kessler Syndrome, which posits that “the density of objects in low Earth orbit (LEO) due to space pollution is high enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions.” Currently, only about 5% of the ~1M debris objects are mapped and tracked. The implications are immense, since the debris constrains the launch of future vehicles (imagine space tourism and billionaires sipping cocktails in a hail of debris!) for various space exploration efforts.

Digantara is an Indian company focused on space debris mapping (Disclosure: I am an advisor). The company was started in 2018 by a team of engineers/entrepreneurs to create solutions to the space debris mapping problem. They were invited to present their business plan at the prestigious 2019 International Astronautical Federation (IAF) start-up pitch event in Washington, D.C., which won them accolades and, more importantly, funding. Digantara’s data products will prove invaluable for trajectory planning for future space launches, predicting when collisions are likely to occur, updating the debris maps and providing input to companies as they tackle the problem of space debris removal. The Indian Space Research Organisation (ISRO), a leading global space agency, provides grants, advice and technical support to the company.

Current methods for space debris mapping are ground-based and use a combination of radar and 2D optical telescopes. The mapping is constrained by weather conditions as well as lighting (mapping is not possible during the day because of solar noise, nor at night because of lack of illumination). Short-range mapping is done with radar, whereas the telescopes can only image at very long range due to the long integration times involved….

