ASC 3D Flash LIDAR product overview

Figure 1 – ASC 3D Flash LIDAR product overview diagram
Figure 2 – Product application example image
Figure 3 – Application example image

PRODUCTS OVERVIEW

ASC's 3D cameras are eye-safe Flash laser radar (3D Flash LIDAR) camera systems that operate much like 2D digital cameras, but measure the range to every object in their field of view (the “scene”) with a single “flash” laser pulse per frame. The technology works in full sunlight or in complete darkness. 2D video cameras capture video streams measured in frames per second (fps), typically between 1 and 30 fps, with 1, 5, 15, and 30 being the most common rates. The same frame-capture paradigm applies to the 3D Flash LIDAR Camera (3D FLC), which supports 1 to 30 fps, or faster if required. At the higher frame rates, the laser system must be designed with adequate cooling margins, which imposes some thermal-design constraints.
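To see why higher frame rates stress the laser's thermal design, note that one pulse is fired per frame, so the average optical power scales linearly with fps. The short sketch below illustrates the arithmetic with a hypothetical pulse energy; the numbers are assumptions for illustration, not ASC specifications.

```python
def average_laser_power_w(pulse_energy_j: float, fps: float) -> float:
    """One laser pulse per frame, so average power = pulse energy * frame rate."""
    return pulse_energy_j * fps

# Hypothetical 10 mJ pulse energy at the common frame rates mentioned above.
for fps in (1, 5, 15, 30):
    print(f"{fps:2d} fps -> {average_laser_power_w(0.010, fps) * 1000:.0f} mW average")
```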

As seen in Figure 1, light from the pulsed laser illuminates the scene in front of the camera, and the lens focuses the reflected light onto the 3D sensor focal plane array, which outputs data as a cloud of points (3D pixels). Each pixel in the sensor contains a counter that measures the elapsed time between the outgoing laser pulse and the arrival of the reflected laser light at that pixel. Because the speed of light is a known constant, converting each elapsed time into a range, and thereby “capturing” the scene in front of the camera, is a relatively straightforward process. Because all pixels are captured from the same pulse, they share an inherent geometric relationship and represent the entire scene at a single instant in time. The point cloud data accurately represents the scene, allowing the user to zoom into the 3D point cloud without distortion.
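The time-of-flight conversion behind this process can be sketched as follows. This is an illustrative example only, not ASC's implementation, and it assumes a hypothetical array of per-pixel round-trip times.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def ranges_from_times(elapsed_times_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times (seconds) to ranges (meters).

    The laser pulse travels out to the target and back, so the one-way
    range is half the round-trip distance.
    """
    return 0.5 * C * elapsed_times_s

# Example: a hypothetical 128 x 128 focal plane array whose counters
# all report a round-trip time of ~667 ns (roughly 100 m range).
times = np.full((128, 128), 667e-9)
print(ranges_from_times(times)[0, 0])  # ~100.0 meters
```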

The 3D Flash LIDAR cameras are single units that include a camera chassis or body, a receiving lens, a focal plane array, and supporting electronics. On the Portable 3D camera, data processing and camera control are handled by a separate laptop computer. The TigerEye 3D camera is controlled via an Ethernet connection, with initial processing done on the camera and display on a PC. The output of both cameras can be stored and displayed on a PC running ASC's software: TigerView™ for the TigerEye and FLASH3D™ for the Portable camera.

Both raw and processed data can be stored or output in various formats for additional video processing using industry-standard 3D computer graphics tools such as Autodesk's Maya, 3D Studio Max, or Softimage.
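As a generic illustration of how point-cloud output can be handed off to third-party 3D tools, the sketch below writes frame data to an ASCII PLY file, a widely supported point-cloud format. This is not ASC's export path; the array and file names are hypothetical.

```python
import numpy as np

def write_ply(path: str, points: np.ndarray) -> None:
    """Write an N x 3 array of XYZ points (meters) to an ASCII PLY file."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        np.savetxt(f, points, fmt="%.4f")

# Hypothetical example: 16,384 points from a single 128 x 128 frame.
points = np.random.rand(128 * 128, 3) * 100.0
write_ply("frame_0001.ply", points)
```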

Figure 2 illustrates a raw-data capture (color-coded for range and viewing) of a single-pulse 3D image taken with a Portable 3D FLVC. Note that the camera can accurately identify vehicles and other objects without additional data. It is also possible to “see” through the windshield to identify passengers and objects inside the vehicle. In this example, the raw data was processed using ASC's range algorithm only (essentially a raw-data image) and color-coded for depth and intensity.
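To show what range-based color coding involves, the sketch below maps a hypothetical range image to RGB colors for display. It is a generic visualization example, not ASC's algorithm, and the frame data and near/far limits are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def colorize_range(range_m: np.ndarray, near: float, far: float) -> np.ndarray:
    """Map a 2D range image (meters) to RGB, near targets red, far targets blue."""
    normalized = np.clip((range_m - near) / (far - near), 0.0, 1.0)
    return plt.cm.jet_r(normalized)[..., :3]  # reversed jet colormap, drop alpha

# Hypothetical 128 x 128 range frame spanning 5 m to 60 m.
frame = np.random.uniform(5.0, 60.0, size=(128, 128))
rgb = colorize_range(frame, near=5.0, far=60.0)
plt.imshow(rgb)
plt.title("Range-coded frame (illustration)")
plt.show()
```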

Note the shape of the rotating helicopter rotor blade in Figure 3, captured from above without motion distortion. This lack of motion distortion, a consequence of capturing the whole scene with a single pulse, is a major feature of ASC's 3D cameras. The image is color-coded for range and intensity, determined using ASC's algorithm. The picture on the right is the same raw data rotated for viewing purposes.