Neurxcore Unveils Next-Gen Neural Processor to Revolutionize AI Inference Applications

Leading AI solutions provider Neurxcore today unveiled its state-of-the-art Neural Processor Unit (NPU) product line, aimed at AI inference applications. The NPU builds on NVIDIA’s open-source Deep Learning Accelerator (Open NVDLA) technology, supplemented with Neurxcore’s patented designs. The SNVDLA IP series promises unparalleled energy efficiency, performance, and capability, predominantly targeting image processing tasks.

Redefining Standards in AI Processing

Neurxcore’s new product line places significant emphasis on image classification and object detection, while its versatility extends to generative AI applications, setting it apart from competitors. The product line has already made its mark, having been tested and validated on TSMC’s 22nm process, and live demonstrations further showcased its potential, running diverse applications seamlessly.

In tandem with the hardware, Neurxcore has rolled out the Heracium SDK. Built on the robust open-source Apache TVM framework, this SDK facilitates seamless configuration, optimization, and compilation of neural network applications on SNVDLA devices.
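
Neurxcore has not published Heracium’s exact interface here, but because the SDK is built on Apache TVM, a compilation flow plausibly resembles TVM’s standard import-and-build pipeline. The sketch below uses TVM’s generic Relay API with an ONNX model; the model path, input shape, and the generic "llvm" target are illustrative assumptions standing in for an SNVDLA-specific backend.

```python
import onnx
import tvm
from tvm import relay

# Load a trained network exported to ONNX (path is illustrative).
onnx_model = onnx.load("model.onnx")

# Import the model into TVM's Relay IR, declaring the input tensor shape.
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# Optimize and compile. A real Heracium flow would presumably select an
# SNVDLA target; the generic "llvm" CPU backend is used here as a stand-in.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("compiled_model.so")  # deployable artifact
```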

Wide-Ranging Applicability

The SNVDLA’s applications are varied, serving sectors from wearables and smartphones to smart TVs, surveillance, robotics, AR/VR, ADAS, edge computing, and even servers. Neurxcore’s vision is evident: it aims to revolutionize these industries by catering to both low-power and high-performance scenarios.

To cater to evolving industry needs, Neurxcore also provides a comprehensive suite that fosters the creation of tailor-made NPU solutions, offering optimized subsystem design, training, quantization, and AI-enhanced model development.

CEO’s Insights

Neurxcore’s CEO, Virgile Javerliac, highlighted the prominence of AI inference, remarking, “80% of AI computational tasks revolve around inference. Striking a balance between energy conservation, cost-efficiency, and performance is paramount.” He lauded his team’s efforts in bringing this cutting-edge product to life and reaffirmed Neurxcore’s dedication to customer service and forging collaborative partnerships.

Redefining Inference in AI

The product line showcases significant enhancements in energy efficiency and performance compared with its NVIDIA predecessor. Distinctive features, such as a configurable number of cores and MAC operations, make it highly adaptable across various markets. Furthermore, Neurxcore’s competitive pricing strategy and open-source software approach, powered by Apache TVM, ensure that its AI solutions are both affordable and flexible.

The Future of AI Semiconductors

A recent report by Gartner, “Forecast: AI Semiconductors, Worldwide, 2021-2027,” emphasized the increasing need for optimized semiconductor devices for AI techniques across data centers, edge computing, and endpoint devices. The forecast projects that AI semiconductor revenue may reach $111.6 billion by 2027, a compound annual growth rate (CAGR) of 20% over five years.
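
As a quick sanity check on those figures, the implied base-year revenue (inferred here, not stated in the excerpt) follows directly from the CAGR definition:

```latex
\[
  \text{base-year revenue} = \frac{\$111.6\ \text{B}}{(1 + 0.20)^{5}} \approx \$44.8\ \text{B}
\]
```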

Neurxcore, with its trailblazing SNVDLA product line, is set to make significant waves in the AI semiconductor industry, marking a pivotal shift in how AI inference is approached and executed.

What Is a Neural Processing Unit?

A Neural Processing Unit (NPU) is a type of microprocessor specifically designed to accelerate the computations needed for large-scale Artificial Intelligence (AI) and neural network functions. Unlike general-purpose processors, NPUs are optimized for the high-volume matrix and vector operations that form the basis of neural networks and deep learning algorithms. Here are some key points about NPUs:

  1. Optimized for Deep Learning: NPUs are tailored to handle the unique computational patterns and structures of deep learning algorithms, especially matrix multiplication, which is a cornerstone of many AI computations (a minimal sketch of this operation follows the list below).
  2. Efficiency and Speed: NPUs can greatly accelerate AI tasks by offloading them from traditional CPUs or GPUs. This offloading improves power efficiency and overall performance when processing neural network tasks.
  3. Integrated with Other Systems: Many modern systems-on-chip (SoCs) integrate NPUs alongside other processing units like CPUs and GPUs. This integration allows devices, especially mobile or edge devices, to run AI tasks locally without needing to connect to a remote server.
  4. Customizable: Some NPUs are designed to be customizable to specific tasks, making them even more efficient for particular AI applications.
  5. Evolving Landscape: The world of AI hardware is rapidly evolving. As deep learning models and algorithms change and grow, so too do the architectures of NPUs. This is an area of significant research, development, and investment.
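
To make the matrix-multiplication point in item 1 concrete, here is a minimal sketch of the core computation in a fully connected neural-network layer, written in Python with NumPy. The shapes and values are illustrative assumptions; an NPU executes this same arithmetic in dedicated hardware rather than general-purpose code.

```python
import numpy as np

# One fully connected (dense) layer: y = activation(W @ x + b).
# This matrix-vector product is exactly the workload NPUs accelerate.

rng = np.random.default_rng(0)

x = rng.standard_normal(128)        # input activations (128 features, illustrative)
W = rng.standard_normal((64, 128))  # weight matrix (64 neurons x 128 inputs)
b = rng.standard_normal(64)         # bias vector

y = np.maximum(W @ x + b, 0.0)      # matrix multiply, add bias, apply ReLU

print(y.shape)  # (64,) -- one output activation per neuron
```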

Major tech companies, including Apple, Google, and Huawei, have started integrating NPUs into their hardware products, particularly smartphones, to enable faster and more efficient AI computation directly on the device.

Michal Pukala
Electronics and telecommunications engineer with a master’s degree in electro-energetics, and an experienced lightning designer. Currently working in the IT industry.
