Synopsys, Inc. has unveiled groundbreaking Ultra Ethernet IP and UALink IP solutions to address the rising demand for high-speed, standards-based interconnects in AI and high-performance computing (HPC). These solutions are designed to enable hyperscale data centers to manage trillions of parameters in AI models, delivering low-latency communication and seamless scalability.
1. Key Features of Synopsys Ultra Ethernet IP
- Scalability: Supports fabrics of up to one million endpoints, delivered as a combination of PHY, MAC and PCS controller, and verification IP.
- Bandwidth Optimization: Delivers up to 1.6 Tbps of bandwidth using patented error-correction technology (a back-of-envelope lane calculation follows this list).
- Interoperability: Broad compatibility demonstrated at industry events such as ECOC and OFC.
- Seamless Integration: Interfaces with higher layers of the Ethernet stack for AI accelerators and smart NICs.
- Verification Efficiency: Verification IP checks protocol compliance and shortens validation cycles.
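To give a rough sense of what a 1.6 Tbps port means in practice, the sketch below works through the lane arithmetic and an example transfer time. The 8-lane, 200 Gb/s-per-lane split and the 90% protocol-efficiency figure are illustrative assumptions about how 1.6TbE links are commonly structured, not details of Synopsys's implementation.

```python
# Back-of-envelope lane math for a 1.6 Tbps Ethernet port.
# Illustrative only: the 8-lane / 200 Gb/s-per-lane split and the 90%
# efficiency factor are assumptions, not a description of Synopsys's IP.

def aggregate_bandwidth_gbps(lanes: int, per_lane_gbps: float) -> float:
    """Aggregate raw bandwidth of a multi-lane link, in Gb/s."""
    return lanes * per_lane_gbps

def transfer_time_seconds(payload_bytes: float, link_gbps: float,
                          efficiency: float = 0.9) -> float:
    """Time to move a payload over the link, assuming a fixed protocol efficiency."""
    usable_bps = link_gbps * 1e9 * efficiency
    return payload_bytes * 8 / usable_bps

if __name__ == "__main__":
    link = aggregate_bandwidth_gbps(lanes=8, per_lane_gbps=200)  # -> 1600 Gb/s (1.6 Tbps)
    print(f"Aggregate link rate: {link} Gb/s")
    # Example: moving a 10 GB gradient shard across the fabric
    t = transfer_time_seconds(payload_bytes=10e9, link_gbps=link)
    print(f"~{t * 1e3:.1f} ms to move 10 GB at 90% efficiency")
```

At these rates, a 10 GB transfer completes in roughly 56 ms under the assumed efficiency, which is why per-port bandwidth of this order matters for synchronizing multi-trillion-parameter models.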
2. Advancements with Synopsys UALink IP
- Scaling Compute Fabrics: Supports scale-up pods of up to 1,024 AI accelerators.
- High-Speed Data Transfer: Provides 200 Gbps per lane for data-intensive workloads (a pod-level sizing sketch follows this list).
- Latency Optimization: Mitigates bottlenecks through memory sharing between accelerators.
- Robust Verification: Built-in protocol checks enhance reliability.
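To put the UALink figures in context, here is a minimal sizing sketch that combines the 200 Gbps-per-lane rate and the 1,024-accelerator pod limit quoted above. The 4-lane link width per accelerator is an assumption chosen for illustration; actual link configurations will vary by design.

```python
# Rough sizing sketch for a UALink-style scale-up pod.
# Assumptions (not from the announcement): each accelerator attaches over a
# 4-lane link; per-lane rate and pod size are the figures cited above.

PER_LANE_GBPS = 200          # per-lane signaling rate cited in the article
LANES_PER_ACCELERATOR = 4    # assumed link width (x4); real configurations vary
MAX_ACCELERATORS = 1024      # pod scale cited in the article

per_accel_gbps = PER_LANE_GBPS * LANES_PER_ACCELERATOR        # 800 Gb/s per device
pod_injection_tbps = per_accel_gbps * MAX_ACCELERATORS / 1e3  # total injection bandwidth

print(f"Per-accelerator link: {per_accel_gbps} Gb/s")
print(f"Pod injection bandwidth at full scale: {pod_injection_tbps:.0f} Tb/s")
```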
3. Industry Collaboration and Adoption
- Juniper Networks: Leverages Synopsys Ethernet IP to introduce 800GbE capabilities, moving toward the 1.6TbE era.
- AMD: Combines Synopsys IP with high-performance processors to create scalable ecosystems.
- Astera Labs: Highlights the need for scalable, power-efficient interconnects for AI workloads.
- Tenstorrent: Utilizes Synopsys IP for efficient RISC-V chip interconnects supporting multi-trillion parameter models.
- XConn: Collaborates with Synopsys to develop high-performance networking systems for AI architectures.
The launch of Synopsys’ Ultra Ethernet and UALink IP solutions marks a transformative step in enabling scalable, high-performance AI and HPC systems. These innovations, backed by industry partnerships, promise to address critical challenges in latency, bandwidth, and interoperability, setting the stage for the next generation of AI-powered infrastructure.