Cisco Launches New AI Networking Chips
Cisco Systems has launched networking chips for AI supercomputers, entering into competition with Broadcom and Marvell Technology. The new chips, part of Cisco's Silicon One series, are currently being tested by five major cloud providers, although their names were not disclosed. These are likely to include industry leaders such as Amazon Web Services, Microsoft Azure, and Google Cloud.
The growing popularity of GPU-powered AI applications such as ChatGPT has underscored the importance of fast communication between individual chips. Cisco, a prominent supplier of networking equipment including Ethernet switches, has introduced the latest generation of its Ethernet switch chips, the G200 and G202. These chips offer double the performance of the previous generation and can connect up to 32,000 GPUs together.
According to Rakesh Chopra, a Cisco Fellow and former Principal Engineer, the G200 and G202 chips are expected to be the most powerful networking chips on the market for AI and machine learning workloads. They aim to deliver a more power-efficient network that requires 40% fewer switches and has lower lag.
Cisco’s entry into the AI supercomputer chip market puts it in direct competition with Broadcom, which announced the Jericho3-AI chip in April. The Jericho3-AI chip likewise supports connecting up to 32,000 GPUs.
Overall, Cisco’s networking chips target the growing demand for AI applications and aim to offer improved performance, efficiency, and connectivity for AI supercomputers.