Want a 3D monitoring environment?

Although high-definition digital video surveillance takes many forms, system integrators are still looking for new ways to use modern devices to interpret detailed day-and-night surveillance imagery.

Video analytics, image fusion, and high definition are the three main directions of research and development.

A bored guard stares at a bank of monitors, drinking one cup of coffee after another. That is the state of video surveillance today at U.S. military installations and homeland security sites.

The threats facing the United States today are complex and diverse, and detailed video image analysis has become crucial wherever video monitors are used.

Surveillance users can draw on a variety of monitoring methods, such as infrared (both short-wave and long-wave), image fusion, satellite links, and unmanned aerial system (UAS) video feeds.

Charlie Morrison, director of full-motion video solutions at Lockheed Martin's Information Systems and Global Services division in Gaithersburg, Maryland, said: "With video analytics technology we can use all of these sources in an integrated way. The technology can track and identify targets."

Morrison went on to say: "Because of the sheer volume of information, we need to organize it and characterize it in advance to ease the load on monitoring operators as they process the data. We have already brought this technology to market."

"The video surveillance screen of the future will be like the world's largest sports network ESPN and CNBC TV channel, scrolling the results of the sporting events and stock quotes under the screen. With the emergence of multiple intelligent video processing technology, the monitor screen is no longer Only a single screen is displayed."

Morrison's team at Lockheed Martin and engineers from Harris, based in Melbourne, Florida, are developing this technology and verifying whether it can be used in military surveillance and intelligence applications. Morrison said: "We hope to apply this commercial technology to Department of Defense (DOD) video surveillance systems."

"Full-motion video has a unique advantage in the collection and analysis of intelligence data," said Jim Kohlhaas, deputy director of the Space Solutions Division at Lockheed Martin. “Thousands of platforms collect important video intelligence every day. The challenge we face is how to collect and classify such a large amount of information and analyze it from the mountain of data to find out the important information needed. Interpretation and exchange of these intelligence data."

"Audacity" audio editing software is one of the main video analysis tools developed by Lockheed Martin. "Audacity" software can mark, classify digital material, and compile information directories. Lockheed Martin publicly revealed that the software also has intelligence analysis capabilities, including video mosaic processing, facial recognition, target tracking, and intelligent automatic alarm function of the area of ​​interest.

Harris Corporation contributes its Full-Motion Video Asset Management Engine (FAME). According to the companies, the engine can integrate video, chat, and audio directly in the video stream. Built on processing and storage equipment from the broadcast industry, it provides the digital infrastructure for delivering an enhanced video stream.

Morrison said: "What we are doing is to maximize the respective advantages of the full-motion video asset management engine and the 'Audacity' software, and to integrate the two technologies closely, so that people can't tell which specific function is which software The result of the function."

Lockheed Martin concentrates on "filtering complex intelligence" to analyze video and identify targets. The company also says the two firms are jointly developing video exploitation capabilities for both real-time video and recorded-video analysis.

The research team is also said to be developing a solution that can "catalog, store, and securely share video and intelligence across organizations and regions."

Morrison said: "More and more asset monitoring systems use infrared, day and night image monitoring, target recognition and other means, we can provide information filtering for the above asset monitoring. This software can organize data, help operators make decisions, shorten Reaction time improves reaction efficiency."

Morrison said they have completed one such monitoring deployment, but he declined to disclose its location or the name of the customer.

Shorten decision time

Morrison said: "By joint efforts, video surveillance operators can use the graphical user interface to easily complete the operation."

Take port security as an example. Tony Morelli, a project manager in Lockheed Martin's Information Systems and Global Services division, said: "Operators can not only see from the monitoring screen that a ship has arrived, but also learn what cargo the vessel was scheduled to carry and what it is actually carrying."

Morrison continued: "The interface can also provide a chat function that uses streaming technology to implement audio conversations between two commanders or operators. In the task video stream, you can also see chat messages."

File analysis

Morrison said: "The current video archiving process is to divide the video recording into several 30-minute or 20-minute video clips. So, it will take a lot of time to retrieve information, especially when you look for 2 seconds. The number of frames."

He added: "It is like using a search engine to find one sentence across 10,000 Word documents."

Morrison said: "It is much faster to tag the data with metatags. For example, if you see a red Ford Mustang parked downstairs, the surveillance staff can find everywhere that car has been detected over the past three days and then play back all the video from just before and after it appears. It is like a digital video recorder such as TiVo letting you go back to the parts of a TV show you missed."

Morrison continued: "This example reflects the system's geographic recognition capabilities. The system also has time, keyword, and geographic search capabilities."

Morrison pointed out that the solution cannot search spoken words directly, but it can transcribe audio into text. Returning to the Ford Mustang example, a surveillance operator could "search for everything related to this red Ford Mustang from ten days ago up to today."

Interactive analysis

Morrison said: "The research direction of the research goal is to integrate video analytics capabilities with touch screen functions. Operators can learn more detailed information by simply picking one out of a large number of data - whether it is a concern of police attention. People or suspect vehicles, or dialogue between two leaders in a particular area of ​​concern."

One advantage is that "the program can work with a wide variety of monitoring equipment." The U.S. Army, Navy, Air Force, and Marine Corps all use many different types of surveillance equipment to gather information, and the joint solution developed by Harris and Lockheed Martin ties the capabilities of all of those devices together. Morrison said: "The key factor in achieving this is standardization." If all users follow the same metadata standard, integrating and classifying the data is not overly complicated, even without device-specific tools. Morrison also pointed out that the relevant video standards are set by the Motion Imagery Standards Board (MISB).
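The sketch below shows, under stated assumptions, why a shared metadata standard simplifies integration: each feed only needs a small mapping onto one common record before cataloging. The field names and the `normalize` helper are illustrative and loosely inspired by the kind of information MISB standards attach to motion imagery; they are not taken from any MISB document.

```python
from dataclasses import dataclass

@dataclass
class FrameMetadata:
    """One normalized metadata record shared by all feeds (illustrative)."""
    precision_timestamp_us: int   # microseconds since the Unix epoch
    platform_id: str              # e.g. aircraft tail number or site name
    sensor_lat_deg: float
    sensor_lon_deg: float
    sensor_alt_m: float

def normalize(raw: dict, source: str) -> FrameMetadata:
    """Map one source's native field names onto the shared record.
    Each service's feed would get its own small mapping like this."""
    mappings = {
        "uas_feed":    ("time_us", "tail", "lat", "lon", "alt_m"),
        "ground_site": ("epoch_us", "site", "latitude", "longitude", "altitude"),
    }
    t, p, la, lo, al = mappings[source]
    return FrameMetadata(raw[t], raw[p], raw[la], raw[lo], raw[al])

# Example: one UAS frame normalized into the shared record
rec = normalize({"time_us": 1700000000000000, "tail": "UAS-12",
                 "lat": 38.80, "lon": -77.05, "alt_m": 1200.0}, source="uas_feed")
```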

Morrison said: "Touchscreen capabilities still need to go through several years of development research, but the system will be based on the operator's intentions in the background, such as past search content and what is being viewed on the screen, to whom it believes that the operator will The information needed is flash.” This is similar to websites like “Amazon” where they analyze the customer’s buying trends and then recommend to customers what they might be interested in.

Panoramic image fusion

Analysis and exploitation of video data improve greatly if the video truly captures every detail. Engineers and scientists at GE Fanuc Intelligent Platforms (formerly Octec Ltd.) in Bracknell, England, are studying how the human eye captures images and passes the information to the brain for analysis, in order to create a new video processing model.

"The problem is how to present imagery in this mode to the human eye in an intuitive way, because human eyes respond to physical movement," said Larry Schafer, deputy director of business development at GE Fanuc Intelligent Platforms. The symbology that many current monitoring systems put on screen is too cluttered, leaving operators dazzled and unable to respond in time.

GE Fanuc engineers point out that if images are presented more intuitively on screen, with as few symbols as possible, monitoring operators can understand the information more thoroughly and make correct decisions.

Schafer and his team hope to change how information is presented by building an array of cameras fitted with all-weather sensors. Schafer said: "The device is called a distributed aperture sensor."

Shepherd said: "To some extent, this is a panoramic zoom camera. When the operator walks out of a tank or other vehicle, he can understand the situation behind him through this camera."

Schafer also said: "The imagery is a form of image fusion. Daytime and night images are superimposed with 160-degree or 360-degree coverage, and the resulting picture has far richer contrast, so it stimulates the eye in a more intuitive way."
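To illustrate the basic principle of superimposing day and night imagery, here is a minimal weighted-blend fusion sketch. Real distributed-aperture systems use far more sophisticated pixel-level fusion; this is only a simplified assumption of the idea, and all names and parameters are illustrative.

```python
import numpy as np

def fuse_visible_thermal(visible: np.ndarray,
                         thermal: np.ndarray,
                         alpha: float = 0.6) -> np.ndarray:
    """Blend a grayscale visible-light frame with a co-registered thermal frame.

    Both inputs are 2-D uint8 arrays of the same shape. `alpha` weights the
    visible channel; (1 - alpha) weights the thermal channel.
    """
    v = visible.astype(np.float32) / 255.0
    t = thermal.astype(np.float32) / 255.0
    fused = alpha * v + (1.0 - alpha) * t
    return (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8)

# Example with synthetic frames standing in for day and night imagery
day = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
night_ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
combined = fuse_visible_thermal(day, night_ir)
```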

Image fusion "The sensors that are constantly operating allow us to return to the data extraction process," providing a comprehensive view of the operators. If there is a target behind the monitoring personnel, whether the target is hiding in the building or sitting in the driving vehicle, the operator can obtain the information within a first time by adjusting the touch screen.

Shepherd said: "This will create a 3D surveillance environment, and the surrounding vision will become as clear and clear as the one in front of you."

According to GE Fanuc Intelligent Platforms, this video processing technology uses the company's IMP20 video processing mezzanine card. The IMP20 is an add-on module that provides the image fusion function for the company's ADEPT104 and AIM12 automatic video trackers.

The company also says the card's image fusion algorithm offers faster run times and lower memory consumption. The IMP20 has a built-in engine that applies rotation, scaling, and translation to compensate for image distortion and misalignment between imagers, reducing the need for precise alignment of the imagers and lowering overall system cost.
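As a rough sketch of what rotation, scaling, and translation compensation can look like in software (the IMP20 performs this on the card itself; the OpenCV-based approach and all parameters below are illustrative assumptions):

```python
import numpy as np
import cv2  # OpenCV

def align_thermal_to_visible(thermal: np.ndarray,
                             angle_deg: float,
                             scale: float,
                             shift_xy: tuple) -> np.ndarray:
    """Apply rotation, scaling, and translation to roughly register a thermal
    frame to a visible frame before fusion, compensating for misalignment
    between the two imagers."""
    h, w = thermal.shape[:2]
    # Rotation and scale about the image center
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)
    # Add the translation component
    m[0, 2] += shift_xy[0]
    m[1, 2] += shift_xy[1]
    return cv2.warpAffine(thermal, m, (w, h))

# Example: correct a thermal imager that is rotated, slightly mis-scaled, and shifted
thermal = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
aligned = align_thermal_to_visible(thermal, angle_deg=-1.5, scale=1.02, shift_xy=(-4, -3))
```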

Schafer said: "This is real-world scene recognition: combining imagery taken in daylight with thermal imaging adds target recognition capability. The same technique also applies to short-wave infrared, which has been in use for a long time."

Short Wave Infrared (SWIR)

Although short-wave infrared (SWIR) sensors have been around for a long time, the technology remains indispensable for mission-critical surveillance applications.

Robert Struthers, director of sales and marketing at Goodrich Intelligence, Surveillance and Reconnaissance Systems (formerly Sensors Unlimited) in Princeton, New Jersey, said: "Experiments show that even with interference from fog, haze, dust, smoke, and other atmospheric conditions, short-wave infrared still images well. Combining short-wave infrared with long-wave or mid-wave infrared in a thermal imager can markedly improve drivers' night vision, support persistent airspace surveillance, and improve the performance of soldiers' night-vision imaging systems. Fundamentally, the thermal infrared bands handle detection and the short-wave infrared handles identification."

Stu Ruthus said: "The high performance of the short-wave infrared camera lies in its ability to shoot urban night scenes without a flash, or to maintain image color without distortion at high exposures. Compared with visible light cameras and night vision goggles (NVG) The infrared wavelength-short-wave indium gallium arsenide camera (InGaAs) has a higher wavelength sensitivity. Combined with the wavelength matching function and the concealed illumination device, soldiers can avoid attacks due to exposure of the night-vision goggles.

Struthers said that short-wave infrared imaging remains excellent during thermal crossover (around dusk and dawn).

He said: "The short-wave infrared spectral band produces images based on reflected light, while other thermal imaging systems use temperature imaging to differentiate the principle of imaging from other systems. The image produced by reflected light imaging will be generated by the thermal spectral band imaging system. The image is sharper, and the temperature-imaged image looks like a black and white visible image with target/biometric capabilities."

Stu Ruthus said: "Goodrich's infrared short-wave indium gallium arsenide camera in the infrared short-waveband (pixel array of 640 * 512) can provide the highest resolution, most cameras have a closed camera shell, or Used as a miniature open frame module embedded in a surveillance frame and a small optical system."

Goodrich's latest InGaAs camera is the SU640KTSX-1.7RT, which features high sensitivity and a wide dynamic range. According to the company's data sheet, in passive monitoring and when used with lasers, the camera can render nighttime scenes in the short-wave infrared band as if they were taken in daylight. The camera has automatic gain control (AGC) to enhance the image and built-in non-uniformity correction (NUC). Because of its lower wavelength cutoff, the camera can even capture photons that previously could be captured only by silicon-based imaging systems.
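To show what AGC and NUC mean in practice, here is a minimal sketch of a two-point non-uniformity correction followed by percentile-based automatic gain control. The functions and parameters are illustrative assumptions, not Goodrich's on-camera implementation.

```python
import numpy as np

def two_point_nuc(raw: np.ndarray,
                  gain: np.ndarray,
                  offset: np.ndarray) -> np.ndarray:
    """Two-point non-uniformity correction: per-pixel gain and offset
    derived from flat-field calibration frames."""
    return raw.astype(np.float32) * gain + offset

def auto_gain_control(frame: np.ndarray,
                      lo_pct: float = 1.0,
                      hi_pct: float = 99.0) -> np.ndarray:
    """Percentile-based AGC: stretch the useful part of the dynamic range
    into an 8-bit image for display."""
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    stretched = (frame - lo) / max(hi - lo, 1e-6)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

# Example pipeline on a synthetic 14-bit frame from a 640 x 512 sensor
raw = np.random.randint(0, 2**14, (512, 640)).astype(np.float32)
gain = np.ones_like(raw)          # real calibration data would replace these
offset = np.zeros_like(raw)
display = auto_gain_control(two_point_nuc(raw, gain, offset))
```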

The U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Virginia, sponsors Goodrich's short-wave infrared camera research. Struthers said: "The main purpose of the research is to produce a device suited to small unmanned aircraft platforms and portable night-vision equipment, one that is small, lightweight, and low-power."

Struthers added: "The widespread use of short-wave infrared cameras has driven the development of covert illuminators and special anti-reflection-coated lenses, maximizing soldiers' night-vision capability."
