How Arm Uses AWS to Reduce Characterization Turnaround Time and Cost

Shubhambhalala
4 min read · Sep 21, 2020


Arm is a leading provider of silicon intellectual property (IP) for the intelligent systems-on-chip that power billions of devices. Arm creates IP that technology partners use to develop integrated semiconductor circuits. The company estimates that 70 percent of the world’s population uses its technology in their smart devices and electronics.

What problem did Arm face?

For many years, Arm relied on an on-premises environment to support its electronic design automation (EDA) workloads, which made it difficult to forecast computing capacity. “The nature of our Physical Design Group business demands a highly dynamic compute environment, and the flexibility to make changes on short notice,” says Philippe Moyer, vice president of design enablement for the Arm Physical Design Group. “In the past, the on-premises compute was sometimes sitting idle until the need arose, which is why the scalability and agility of the cloud is a good solution for our business.”

What do EDA and EDA workloads mean?

EDA stands for Electronic Design Automation: the software tools and workflows used to design semiconductor chips. Dozens of tools are involved in taking a chip from specification to fabrication. The process is very compute heavy and highly concurrent, and storage is often the performance bottleneck.

Understanding what an EDA workload looks like:

Characterization is a good example. Once the measured results come back, the simulations have to be run again with changed data or more processing power, so the workload needs scaling and on-the-fly addition of resources.
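To make that re-run loop concrete, here is a minimal Python sketch of how such a characterization flow can be driven. The `simulate_corner` function, the accuracy tolerance, and the doubling policy are hypothetical stand-ins rather than Arm’s actual tooling; the point is simply that each pass fans out many independent simulations, and the next pass may need a larger worker pool.

```python
from concurrent.futures import ProcessPoolExecutor
import random


# Hypothetical stand-in for one characterization job (e.g. one PVT corner).
# In a real flow this would launch a commercial EDA simulator on a compute node.
def simulate_corner(corner, resolution):
    measured_error = random.uniform(0, 1) / resolution  # pretend finer runs are more accurate
    return corner, measured_error


def characterize(corners, workers=8, resolution=1, tolerance=0.05):
    """Run one characterization pass; re-run with more workers and a finer
    resolution until every corner meets the accuracy target."""
    while True:
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(simulate_corner, corners, [resolution] * len(corners)))
        if max(error for _, error in results) <= tolerance:
            return results
        # Results not good enough yet: scale the pool and refine the inputs on the fly.
        workers *= 2
        resolution *= 2


if __name__ == "__main__":
    corners = [f"pvt_corner_{i}" for i in range(64)]
    print(len(characterize(corners)), "corners characterized")
```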

Arm was also looking to improve agility and keep development on schedule. “With our on-premises environment, our data center was constrained in terms of scalability, and deployment of additional compute capacity would typically take one month for approvals and at least three months to procure and install hardware,” says Vicki Mitchell, vice president of systems engineering for Arm. “We have aggressive deadlines, and waiting that long could make or break a project for us.”

How did Arm use AWS to solve this problem?

In 2017, Arm chose to move part of its EDA workloads to Amazon Web Services (AWS) to gain agility and scalability. AWS was a natural choice: it is the market leader and understands the semiconductor space well.

Initially, the Arm Physical Design Group ran its EDA workloads on Amazon Elastic Compute Cloud (Amazon EC2) Intel processor-based instances. It also used Amazon Simple Storage Service (Amazon S3), in combination with Amazon Elastic File System (Amazon EFS), for EDA data storage. When AWS announced the availability of Amazon EC2 A1 instances powered by Arm-based Graviton processors, the Arm Physical Design IP team began to run portions of its EDA workloads on A1 instances; according to the team, running on Graviton instances also lets Arm contribute to the development of the EDA ecosystem on the Arm architecture. In addition, Arm uses Amazon EC2 Spot Instances for all workloads. Spot Instances are spare compute capacity available at discounts of up to 90 percent compared with On-Demand prices.
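As a rough illustration of the building blocks mentioned above, the snippet below uses boto3 to request Graviton-based (A1) capacity as Spot Instances. The AMI ID, region, and instance counts are placeholders for illustration only; this is not Arm’s actual deployment setup.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request Arm-based (Graviton) capacity as Spot Instances to cut compute cost.
# The AMI ID below is a placeholder; it would need to be an arm64 image with the EDA tools installed.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI
    InstanceType="a1.4xlarge",         # Graviton-powered A1 instance family
    MinCount=1,
    MaxCount=10,                       # scale out across many Spot Instances
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)

for instance in response["Instances"]:
    print("launched", instance["InstanceId"], instance["InstanceType"])
```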

Time and cost reduction in Characterization Turnaround

By using AWS, the Arm Physical Design IP team can scale its EDA environment up or down quickly, from 5,000 cores to 30,000 cores, on demand. This scalability and flexibility translates to faster turnaround: using AWS, EDA characterization turnaround time was reduced from a few months to a few weeks.
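The back-of-the-envelope calculation below shows why that elasticity matters for turnaround, under the simplifying assumption that characterization jobs are close to perfectly parallel. The core-hour figure is illustrative, not a number from the case study.

```python
# Why elastic scaling shortens turnaround, assuming a near-perfectly
# parallel characterization workload (illustrative numbers only).
total_core_hours = 5_000 * 24 * 90   # e.g. ~90 days of work on a fixed 5,000-core farm

for cores in (5_000, 15_000, 30_000):
    wall_clock_days = total_core_hours / cores / 24
    print(f"{cores:>6} cores -> about {wall_clock_days:.0f} days of wall-clock time")
```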

By running its EDA workloads on Arm-based Graviton instances, Arm is also lowering its AWS operational costs. According to the team, the Graviton processor family reduced the AWS cost of its logic characterization workload by 30 percent per physical core compared with Intel-powered instances at the same throughput.
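For intuition, the quoted 30 percent per-core saving works out as follows; the hourly rates here are placeholders rather than actual AWS pricing.

```python
# Illustrative arithmetic for a 30 percent per-core saving at equal throughput.
# The hourly rates are placeholders, not real AWS prices.
intel_cost_per_core_hour = 0.050
graviton_cost_per_core_hour = intel_cost_per_core_hour * (1 - 0.30)

core_hours = 10_800_000  # same amount of characterization work on either platform
print(f"Intel-based spend:    ${intel_cost_per_core_hour * core_hours:,.0f}")
print(f"Graviton-based spend: ${graviton_cost_per_core_hour * core_hours:,.0f}")
```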

Future scope for Arm and AWS

Arm now plans to use the next generation of Amazon EC2 Arm instances, powered by Graviton2 processors with 64-bit Arm Neoverse cores. Graviton2 offers even better performance and scalability and suits a wider range of EDA workloads, and Arm is looking forward to using it for better performance and additional cost savings.

Thanks for reading this article. Feel free to connect with me on LinkedIn to ask questions or share suggestions.

References

  1. https://aws.amazon.com/solutions/case-studies/arm-case-study/
  2. https://europepmc.org/article/med/17611182
  3. https://www.snia.org/sites/default/files/SDC/2016/presentations/performance/Principe_Bhadaliya_Introducing_EDA_Workload_SPEC_SFS_Benchmark_v2.pdf
