Amazon takes on Intel, AMD, & Nvidia with custom computing chips
- Taking on the long-running processor rivalry among Intel, AMD, and Nvidia, Amazon has unveiled its third-generation Graviton chip and the new Trn1 instance
- AWS claims the new chips will help customers significantly improve performance while cutting costs and energy use
When Amazon acquired Annapurna Labs in 2015, the tech giant was already seriously evaluating building custom chips for its cloud infrastructure services, to gain an edge over its arch-rivals, Microsoft and Google. Today, the company has two new custom chips that could give Intel Corp, Nvidia Corp, and Advanced Micro Devices (AMD) a run for their money.
At its re:Invent 2021 conference yesterday, Amazon unveiled the Graviton3 chip and the Trn1 instance. The former is the next generation of its custom Arm-based general-purpose processor, while Trn1 is a new instance type for training deep learning models in the cloud.
Amazon Web Services (AWS) CEO Adam Selipsky said during his keynote address that his market-leading public cloud company is focused on “making the full power of machine learning available for all customers. Lowering the cost of training and inference are major steps of the journey.”
AWS vs Intel vs AMD vs Nvidia
The third generation of Graviton will soon be available in AWS's C7g instances, the company said, adding that the processors are optimized for workloads such as high-performance computing, batch processing, media encoding, scientific modeling, ad serving, and distributed analytics.
Selipsky says that Graviton3 is up to 25% faster for general-compute workloads versus Graviton2, with twice the floating-point performance for scientific workloads, twice the performance for cryptographic workloads, and three times the performance for machine-learning workloads. On top of that, Graviton3 uses up to 60% less energy for the same performance compared with the previous generation, Selipsky claims.
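To put the efficiency claim in perspective: "up to 60% less energy for the same performance" works out to roughly 2.5 times the performance per watt, as a quick back-of-the-envelope calculation shows (these are AWS's own marketing figures, not independent measurements):

```python
# Back-of-the-envelope check on AWS's claimed figure (not independent data).
graviton2_energy = 1.0  # normalized energy for a fixed workload on Graviton2

# "Up to 60% less energy for the same performance" on Graviton3:
graviton3_energy = graviton2_energy * (1 - 0.60)

# Same work divided by less energy = performance-per-watt improvement factor.
perf_per_watt_gain = graviton2_energy / graviton3_energy
print(round(perf_per_watt_gain, 1))  # -> 2.5
```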
Amazon’s vice president of Elastic Compute Cloud, Dave Brown, told Reuters that the company expects the chip to deliver better performance per dollar than Intel’s offerings. Even so, AWS continues to work closely with Intel, AMD, and Nvidia, and Brown said AWS wants to keep the computing market competitive by offering an additional choice of chips.
Graviton3 and Trn1
According to Selipsky, Trn1 will be Amazon’s instance for machine-learning training, delivering up to 800 Gbps of networking bandwidth, making it well suited for large-scale, multi-node distributed training. Customers will be able to deploy Trn1 in clusters of tens of thousands of instances to train models containing upwards of trillions of parameters.
As per reports online, Trn1 supports popular frameworks including Google’s TensorFlow, Facebook’s PyTorch, and Apache MXNet, and uses the same Neuron SDK as Inferentia, the company’s cloud-hosted chip for machine-learning inference. Amazon quotes 30% higher throughput and 45% lower cost per inference compared with standard AWS GPU instances.