As a way of improving its hybrid and multi-cloud strategies, Santa Clara, CA-based semiconductor company Advanced Micro Devices (AMD) recently inked a technical agreement with Google Cloud under which AMD will run its electronic design automation (EDA) chip-design workloads on Google Cloud.
In doing so, AMD will be able to extend the on-site capabilities of its data centers with Google Cloud's artificial intelligence, machine learning, global networking, and storage services.
Modern chip design requires developers and tech specialists to account for elasticity, scale, and the efficient use of resources, particularly as each node advancement increases the demand for compute processing.
The addition of Google Cloud's latest compute-optimized C2D instances, powered by third-generation AMD EPYC processors, to AMD's roster of EDA workload resources will make chip-design workloads easily scalable and flexible.
Through the partnership with Google Cloud, AMD expects to run more designs in parallel, giving it greater flexibility in managing short-term compute demands. Google Cloud will also make this possible without compromising the allocation for long-term projects.
Set to roll out over several years, the initiative will allow Google Cloud and AMD to work together on fresh and innovative endeavors. AMD stands to benefit from the partnership through improved design and operating capacity thanks to Google Cloud's AI and machine learning tools and technological frameworks; increased transparency in project expenditures and resource-consumption monitoring; and reduced vendor lock-in, which will result in greater agility.
AMD's corporate vice-president of silicon design engineering Mydung Pham says that leveraging Google Cloud's C2D instances for the company's complex EDA workloads has been a great help to its IT and engineering personnel. Google Cloud C2D enables AMD to become more flexible, and its implementation has also opened doors to high-performance resources for finding the right compute solution for more complex EDA workflows.
For his part, Google Cloud general manager and vice-president for infrastructure Sachin Gupta expressed pleasure at being able to provide AMD with the infrastructure its compute performance requirements demand, as well as equipping the company with AI solutions that are helping it design more innovative technologies. For Gupta, Google Cloud's partnership with AMD unlocks the flexibility AMD needs through a good mix of speed, scale, and cloud security.