
Arm’s Trillium offers industry’s most scalable, versatile ML compute platform


Arm has announced Project Trillium, a suite of Arm IP that includes new, highly scalable processors designed to deliver enhanced machine learning (ML) and neural network (NN) functionality. The initial technologies focus on the mobile market and will enable a new class of ML-equipped devices with advanced compute capabilities, including state-of-the-art object detection.

“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint. To meet this demand, Arm is announcing its new ML platform, Project Trillium,” said Rene Haas, president, IP Products Group, Arm.

ML technologies today tend to focus on specific device classes or the needs of individual sectors. Arm’s Project Trillium changes that by offering ultimate scalability. While the initial launch focuses on mobile processors, future Arm ML products will deliver the ability to move up or down the performance curve – from sensors and smart speakers, to mobile, home entertainment, and beyond.

Performance

Arm’s new ML and object detection processors not only provide a massive efficiency uplift over standalone CPUs, GPUs, and accelerators, but also far exceed the capabilities of traditional programmable logic such as DSPs.

The Arm ML processor is built from the ground-up, specifically for ML. It is based on the highly scalable Arm ML architecture and achieves the highest performance and efficiency for ML applications:

  • For mobile computing, the processor delivers more than 4.6 trillion operations per second (TOPs), with a further 2x-4x uplift in effective throughput in real-world uses through intelligent data management.
  • Unmatched performance in thermal- and cost-constrained environments, with an efficiency of over three trillion operations per second per watt (TOPs/W) – a rough implied power figure is worked out below.
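
Taken at face value, those two headline figures also bound the ML processor’s power envelope in a mobile design. The division below is a back-of-the-envelope reading of the quoted numbers, not an Arm specification:

  4.6 TOPs ÷ 3 TOPs/W ≈ 1.5 W

In other words, the quoted mobile throughput is intended to fit within roughly a watt and a half of sustained power.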

In combination, the Arm ML and object detection (OD) processors perform even better, delivering a high-performance, power-efficient people detection and recognition solution. Users will enjoy high-resolution, real-time, detailed face recognition on their smart devices, delivered in a battery-friendly way.

Arm NN software, used alongside the Arm Compute Library and CMSIS-NN, is optimized for NNs and bridges the gap between NN frameworks such as TensorFlow, Caffe, and Android NN and the full range of Arm Cortex® CPUs, Arm Mali™ GPUs, and ML processors. Developers get the highest performance from their ML applications by being able to fully utilize the underlying Arm hardware. More details on Arm NN software are available on Arm’s website.
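
As a rough illustration of what that bridging looks like in practice, the C++ sketch below loads a TensorFlow model through the Arm NN parser, optimizes it for the Compute Library CPU backend, and prepares it for inference. The model file, tensor names, and shapes are placeholders, and header and function names follow the 2018-era Arm NN SDK, so exact signatures may differ between releases; treat it as a sketch rather than a verbatim recipe.

```cpp
// Minimal Arm NN sketch: parse a frozen TensorFlow graph, optimize it for the
// available Arm backends, and load it into the runtime. "model.pb", "input_node"
// and "output_node" are placeholder names used purely for illustration.
#include <armnn/ArmNN.hpp>
#include <armnnTfParser/ITfParser.hpp>
#include <map>
#include <string>
#include <vector>

int main()
{
    // Describe the input tensor (1x224x224x3, NHWC) so the parser knows the graph's input shape.
    const unsigned int dims[] = { 1, 224, 224, 3 };
    const armnn::TensorShape inputShape(4, dims);

    // Parse the TensorFlow protobuf into an Arm NN network graph.
    auto parser = armnnTfParser::ITfParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
        "model.pb",
        { { "input_node", inputShape } },
        { "output_node" });

    // Create the runtime and optimize the graph for the preferred backends:
    // CpuAcc (Compute Library NEON kernels) with CpuRef as a reference fallback.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
        *network,
        { armnn::Compute::CpuAcc, armnn::Compute::CpuRef },
        runtime->GetDeviceSpec());

    // Load the optimized network; per-frame inference is then issued with
    // runtime->EnqueueWorkload(networkId, inputTensors, outputTensors).
    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}
```

The same network description can be re-optimized for a Mali GPU by listing armnn::Compute::GpuAcc ahead of the CPU backends, which is the portability point the Arm NN layer is meant to deliver.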

The new suite of Arm ML IP will be available for early preview in April of this year, with general availability in mid-2018.

