Video processing is a foundational technology of the modern world. It enables electronic systems to record video, process it, and extract the information it contains. Video processing, in turn, is the base technology powering a myriad of applications, from intelligent city traffic management to broadcast.
Each of these applications requires the ability to process high-resolution frames, such as 4K or 8K, at frame rates of 60 frames per second or greater. That is equivalent to rendering roughly 500 million pixels per second at 4K, or 2 billion pixels per second at 8K. This is quite a feat even for a basic capture-and-display pipeline that simply shows the video it receives. When further processing is required, for instance object classification and detection or transcoding, the work needed to reach an acceptable frame rate is significant. This is especially true when the video analysis is time-sensitive, as in smart city traffic monitoring systems, where sophisticated machine learning and artificial intelligence algorithms are used to predict and smooth traffic flow. Implementing these algorithms can introduce bottlenecks that significantly degrade application performance, especially when the processing involves multiple operations on a single pixel or array of pixels.
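These throughput figures follow from simple arithmetic: pixels per frame multiplied by frames per second. A minimal sketch (resolution and frame-rate values taken from the standard UHD formats):

```cpp
#include <cstdint>

// Sustained pixel throughput a pipeline must handle:
// width * height * frames per second.
constexpr uint64_t pixel_rate(uint64_t width, uint64_t height, uint64_t fps) {
    return width * height * fps;
}

// 4K UHD (3840 x 2160) at 60 fps: 497,664,000 pixels/s (~500 million).
// 8K UHD (7680 x 4320) at 60 fps: 1,990,656,000 pixels/s (~2 billion).
```

At 8K the pixel rate is four times the 4K figure, which is why even a pass-through pipeline is demanding before any per-pixel processing is added.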
Building complex video processing systems goes beyond the need for raw processing capability. It also requires enough I/O capacity to connect a wide variety of external cameras, sensors, and actuators. A smart city traffic management system, for example, might need to support multiple video sensors while offering high-performance network interfacing along with local recording and storage of critical events using JPEG XS. For a different example, take a medical surgical robotics system that relies heavily on video processing. The system must communicate with sensors while simultaneously controlling illumination and finely driving various actuators and motors. In both cases the interfacing challenges can be substantial. There is strong industry demand for devices that can support many high-speed sensors while also offering connectivity to a vast array of industrial and networking interfaces.
The Role of FPGAs in Video Processing
One of the most popular techniques system design engineers use to solve these interfacing and performance challenges is the Field Programmable Gate Array (FPGA). FPGAs provide the designer with logic resources that can be used to create highly parallel, pipelined processing designs. Along with the flexibility of the fabric, the I/O structures in FPGAs are also extremely adaptable, supporting both low-speed and high-speed connections. This lets an FPGA accommodate a range of high-performance video sensors and networking interfaces, while also implementing low-bandwidth industrial, traditional, and custom interfaces to control sensors, actuators, motors, and other devices external to the FPGA.
Implementing video-processing algorithms in logic enables highly parallelized implementations. This parallelization increases both the precision and the speed of processing, as processor bottlenecks can be eliminated.
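The behavior of such a pipelined design can be modeled in software. The sketch below models a hypothetical three-stage pixel pipeline (gain, offset, clamp; the stage operations are invented for illustration). In FPGA fabric each stage is a register stage and all three operate concurrently on successive pixels, so once the pipeline fills, one result emerges per clock:

```cpp
#include <cstddef>
#include <vector>

// Software model of a 3-stage pixel pipeline. Each member variable
// stands in for a hardware register between pipeline stages.
struct Pipeline {
    int s1 = 0, s2 = 0, s3 = 0;  // per-stage registers
    // One "clock tick": shift data through the stages. Updates run in
    // reverse order so each stage reads the previous tick's value.
    int tick(int in) {
        int out = s3;              // pipeline output (already clamped)
        s3 = s2 > 255 ? 255 : s2;  // stage 3: clamp to 8 bits
        s2 = s1 + 16;              // stage 2: add offset
        s1 = in * 2;               // stage 1: apply gain
        return out;
    }
};

std::vector<int> process(const std::vector<int>& pixels) {
    Pipeline p;
    std::vector<int> out;
    // Feed the pixels, then three extra cycles to drain the pipeline.
    for (std::size_t i = 0; i < pixels.size() + 3; ++i) {
        int in = i < pixels.size() ? pixels[i] : 0;
        int result = p.tick(in);
        if (i >= 3) out.push_back(result);  // skip the pipeline-fill outputs
    }
    return out;
}
```

The three-cycle latency before the first valid output is the pipeline depth; after that, throughput is one pixel per tick regardless of how many stages are chained, which is the property that makes deep pipelines attractive in fabric.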
Selecting the FPGA
Naturally, the choice of FPGA will differ between applications to provide the most efficient solution. Designers choose devices according to their logic capacity, interfacing capabilities, performance, and specialized hard macros. For instance, devices in the Intel® Arria® 10 family are typically chosen for Pro A/V and medical video processing applications, while those in the Stratix® 10 family are ideal for broadcast solutions. Alongside high-performance logic, the Arria® 10 family offers developers an array of high-bandwidth connectivity solutions in the GX and GT variants, while the SX variant includes Arm® Cortex®-A9 processors that support sequential processing tasks such as human machine interface (HMI) GUIs, communications protocols, and so on.
The Intel® Stratix® 10 family provides a significant step up in capability, with integrated Arm® Cortex®-A53 cores in SX devices, high-performance floating-point and throughput solutions in GX devices, and AI/ML support in NX devices. This range of options lets the designer pick the FPGA appropriate to the specific application at hand.
No matter which device is selected, designers need a broad range of production-ready IP to meet ever-tightening project deadlines.
Within Intel® Quartus® Prime Design Software, developers can benefit from Intel's extensive Video and Image Processing (VIP) Suite. This suite includes twenty-plus optimized, production-ready IP blocks that provide the core functionality required to create image and video processing pipelines. To allow fast integration with and connection between the cores of the VIP Suite, the IP blocks are connected via the Intel Avalon® streaming interface. This enables a mix-and-match approach to the video IP, allowing blocks to be inserted into the video processing pipeline as needed. The video IP provides the designer with a range of features, including:
- Interfacing: support for a variety of sensor and camera interfaces, ranging from HDMI and SDI to DisplayPort, MIPI, and Ethernet (GigE Vision)
- Capture, Correction, and Processing: preparing the video to meet processing requirements, including color space conversion, de-interlacing, clipping, gamma correction, sync, and chroma resampling. Spatial or temporal noise can also be removed with a 2D filter to clean the video stream.
- Formatting: the ability to format the output video with alpha mixing, scaling, and interlacing.
- Buffering: support for writing and reading frame buffers in DDR memory. This allows the developer to convert between input and output frame rates, and also makes the processed video available to the rest of the system for advanced video processing.
- Analytics and Test: support for on-the-fly video and pattern generation, enabling development of the video processing pipeline without a camera or sensor being present.
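As one concrete example of the processing functions listed above, color space conversion reduces to a weighted sum per pixel. The sketch below is a software reference model using full-range BT.601 coefficients; it is illustrative only, not the VIP color space converter core's actual implementation (which supports configurable coefficient sets):

```cpp
#include <cmath>
#include <cstdint>

struct YCbCr { uint8_t y, cb, cr; };

// Saturate a result into the 8-bit range, rounding to nearest.
static uint8_t clamp8(double v) {
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)std::lround(v);
}

// Full-range BT.601 RGB -> YCbCr: luma is a weighted sum of R, G, B;
// the two chroma channels are offset differences centered on 128.
YCbCr rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b) {
    double y  =  0.299 * r + 0.587 * g + 0.114 * b;
    double cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0;
    double cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0;
    return { clamp8(y), clamp8(cb), clamp8(cr) };
}
```

In fabric, the same three multiply-accumulate rows map naturally onto DSP blocks and can run once per pixel per clock.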
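Similarly, the alpha mixing used for formatting reduces to a per-pixel weighted average of a foreground and background sample. A minimal sketch with an 8-bit alpha channel (illustrative arithmetic, not the Mixer IP's exact implementation):

```cpp
#include <cstdint>

// Per-pixel alpha blend: foreground weighted by alpha, background by
// (255 - alpha). Applied per color channel; alpha = 255 means fully
// opaque foreground, alpha = 0 means background only.
uint8_t alpha_blend(uint8_t fg, uint8_t bg, uint8_t alpha) {
    // Adding 127 before dividing by 255 rounds to nearest.
    return (uint8_t)((fg * alpha + bg * (255 - alpha) + 127) / 255);
}
```

Because the operation is independent per pixel, a hardware mixer can apply it to every pixel of every layer in parallel as the streams pass through.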
Although the Video and Image Processing Suite is undoubtedly extensive, other IP functions may be required. In that case, developers can take advantage of a broad range of ecosystem IP from partners. These partners include intoPIX, which offers an array of compression IP such as JPEG XS; Rambus (previously Northwest Logic), which offers MIPI interfacing solutions; and Macnica, which offers various video-over-IP solutions.
The wide variety of Intel® and partner ecosystem IP allows developers to build a custom video processing application quickly and efficiently. For custom algorithmic implementations, developers can use the Intel® HLS Compiler, which lets them describe the algorithm in a higher-level programming language. This further reduces design and verification time compared with a register transfer level (RTL) implementation.
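To illustrate the HLS style, the sketch below writes a 3-tap horizontal smoothing filter as ordinary C++. With the Intel HLS Compiler such a function would be declared as a component and compiled to RTL; the tool-specific attributes are omitted here so the code also runs as standard C++, which is part of the flow's appeal: the same source can be verified in software first. The filter itself is a hypothetical example, not an Intel IP core:

```cpp
#include <cstdint>

// 3-tap horizontal smoothing filter over one scanline, written in an
// HLS-friendly style: a simple loop, integer arithmetic, and weights
// (1, 2, 1)/4 that synthesize to shifts and adds rather than multipliers.
void smooth3(const uint8_t* in, uint8_t* out, int width) {
    for (int x = 0; x < width; ++x) {
        // Replicate edge pixels at the line boundaries.
        int left  = in[x > 0 ? x - 1 : 0];
        int mid   = in[x];
        int right = in[x < width - 1 ? x + 1 : width - 1];
        // Weighted average with +2 for rounding before the divide by 4.
        out[x] = (uint8_t)((left + 2 * mid + right + 2) / 4);
    }
}
```

An HLS tool would typically pipeline this loop to accept one pixel per clock, giving the hardware throughput of a hand-written RTL filter from a description that took minutes to write and test.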
Developing modern video processing applications that can handle 4K and 8K resolutions requires extensive interfacing and processing capabilities. The broad range of Intel and partner ecosystem video processing and connectivity IP lets developers select and reuse capabilities, and the powerful FPGA fabric is ideal for processing high-resolution video streams. These features, together with an efficient design flow, enable rapid development of the next generation of smart video applications.