ADLINK Technology Inc. has taken a significant step forward in computer-on-module technology with its COM-HPC-cRLS module, powered by 13th Gen Intel® Core™ processors. Let’s explore the key features and capabilities that make the COM-HPC-cRLS an industry game-changer.
Utilizing Intel’s Advanced Hybrid Architecture
Intel’s advanced hybrid architecture lies at the core of the COM-HPC-cRLS module, combining 8 Performance-cores and 16 Efficient-cores with an expanded 36MB cache to deliver exceptional performance per watt. Support for AVX-512 VNNI and Intel® UHD AI inferencing makes the module stand out as ideal for edge AI and IoT applications, allowing it to process and analyze data efficiently and intelligently.
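As a rough illustration (not taken from ADLINK’s documentation, and assuming the OpenVINO runtime is installed), the sketch below enumerates the inference devices such a platform exposes, where “CPU” corresponds to the VNNI-capable hybrid cores and “GPU” to the integrated Intel UHD graphics:

```python
# Minimal sketch: list the OpenVINO inference devices available on the system.
from openvino.runtime import Core

core = Core()
for device in core.available_devices:
    # FULL_DEVICE_NAME is a standard OpenVINO device property.
    print(device, "->", core.get_property(device, "FULL_DEVICE_NAME"))
```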
The Power of 13th Gen Intel Core Processors
The COM-HPC-cRLS offers up to a 13th Gen Intel® Core™ i9 processor operating at a 65W TDP. This processing power is complemented by two 2.5GbE LAN ports and support for up to 128GB of DDR5 SODIMM memory at 4000MT/s. What truly stands out, however, is the module’s 1x16 PCIe Gen5 interface: although it carries fewer lanes than some predecessors, performance remains uncompromised, with 32GT/s per lane paving the way for next-generation compute-intensive edge innovations.
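To put that 32GT/s figure in perspective, here is a quick back-of-the-envelope calculation (our own, not from ADLINK’s materials) of the usable per-direction bandwidth of a Gen5 x16 link with 128b/130b encoding:

```python
# Estimate usable PCIe Gen5 x16 bandwidth in one direction.
transfer_rate = 32e9             # transfers per second per lane (Gen5)
encoding_efficiency = 128 / 130  # 128b/130b line encoding
lanes = 16

bandwidth_gb_s = transfer_rate * encoding_efficiency * lanes / 8 / 1e9
print(f"~{bandwidth_gb_s:.0f} GB/s per direction")  # roughly 63 GB/s
```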
Precision and Synchronization with Intel(r) TCC and TSN Support
The COM-HPC-cRLS supports Intel® TCC (Time Coordinated Computing) and TSN (Time-Sensitive Networking): TCC improves CPU and I/O timeliness within a system, while TSN synchronizes timing precisely across multiple systems over the network. Together, these features enable real-time workloads to execute with ultra-low latency, making the module ideal for demanding real-time computing applications across sectors such as industrial automation, semiconductor equipment testing, AI-driven robotics, autonomous vehicles, and aviation.
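As a loose illustration of the kind of deterministic workload such a platform is meant to host (this is generic Linux scheduling, not Intel’s TCC or TSN tooling), a cyclic task might be pinned to a dedicated core and run under a real-time scheduling policy:

```python
# Pin the process to one core and request SCHED_FIFO so each 1 ms cycle runs
# with deterministic latency. Requires root or CAP_SYS_NICE; the core number
# and priority below are hypothetical.
import os
import time

CORE = 2          # hypothetical core reserved for the workload
PRIORITY = 80     # SCHED_FIFO priority (1-99)
PERIOD_S = 0.001  # 1 ms control cycle

os.sched_setaffinity(0, {CORE})
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(PRIORITY))

next_wake = time.monotonic()
for _ in range(1000):
    # ... one cycle of real-time work would go here ...
    next_wake += PERIOD_S
    time.sleep(max(0.0, next_wake - time.monotonic()))
```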
Future-Proof Your AI with ADLINK’s Vision
ADLINK’s COM-HPC-cRLS stands as proof of the company’s commitment to pushing the limits of edge computing in an ever-evolving technological landscape. Beyond its technical strengths, the module simplifies developers’ tasks by streamlining application-specific carrier designs and speeding time to market, thanks to its PCIe Gen5 integration. Its blend of computational power, connectivity, and real-time capabilities addresses edge AI use cases elegantly, providing a future-proof foundation for further innovation.
COM-HPC-cRLS
Up to 24 cores (8 P-cores + 16 E-cores), 32 threads
16 PCIe Gen5 lanes, 8 PCIe Gen4 lanes
Up to 128GB DDR5 SO-DIMM at 4000 MT/s
2x 2.5GbE LANs
AI inferencing (AVX-512 VNNI, Intel® UHD)
What is Intel® UHD AI inferencing?
Intel® UHD AI inferencing refers to the utilization of Intel’s UHD (Ultra High Definition) graphics architecture for performing AI inferencing tasks. AI inferencing is the process of using a trained machine learning model to make predictions or decisions based on new input data. This is a crucial step in many AI applications, as it allows models to be deployed and used in real-world scenarios.
Intel’s UHD graphics architecture, which is commonly found in their processors and integrated graphics solutions, is being leveraged to accelerate AI inferencing tasks. This architecture incorporates specialized hardware features that can enhance the performance of certain AI workloads, particularly those related to inferencing. These hardware features are optimized for matrix calculations, which are a fundamental operation in many machine learning models.
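As a hedged sketch of what this looks like in practice, the snippet below uses the OpenVINO runtime to compile a model for the integrated GPU and run a single inference; the model file name and input shape are placeholders rather than anything specific to the COM-HPC-cRLS:

```python
# Compile a model for the integrated GPU with OpenVINO and run one inference.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # hypothetical OpenVINO IR model
compiled = core.compile_model(model, "GPU")  # "GPU" targets the Intel UHD iGPU

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([input_tensor])            # single synchronous inference
print(result[compiled.output(0)].shape)
```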
The term “UHD AI inferencing” highlights Intel’s integration of AI capabilities into their graphics architecture to enhance the processing of AI models during inferencing tasks. By using UHD graphics for AI inferencing, Intel aims to provide better performance, efficiency, and energy optimization compared to general-purpose computing approaches. This is particularly beneficial for edge computing scenarios where power efficiency and real-time processing are critical.