The Ultra Ethernet Consortium (UEC) has gained 45 new members in about five months, bringing the total to 55 companies and 715 industry experts. The consortium aims to refine Ethernet technology for high-end AI and HPC clusters by making targeted improvements to hardware and software while maintaining cost-efficiency and interoperability. Giants such as Baidu, Huawei, Nokia, Lenovo, Supermicro, and Tencent have joined the consortium [a2baec83].
UEC plans to publicly release version 1.0 of the specification in the third quarter of 2024. The consortium is backed by founding members AMD, Arista, Broadcom, Cisco, Eviden, HPE, Intel, Meta, and Microsoft. However, it has yet to gain support from industry giants such as AWS, Google, and Nvidia [a2baec83].
The Ultra Ethernet technology aims to boost the performance of AI and HPC workloads by modernizing RDMA operation over Ethernet, introducing features such as multi-path packet spraying, flexible packet ordering, and improved congestion-control mechanisms [a2baec83].
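To make the multi-path idea concrete, the sketch below contrasts classic per-flow ECMP hashing, where every packet of a flow is pinned to one path, with per-packet spraying, where successive packets of the same flow rotate across all available paths. This is an illustrative assumption-laden toy, not the UEC specification: the path count, hashing scheme, and function names are invented for the example.

```python
# Illustrative only: per-flow ECMP vs. per-packet "spraying".
# Paths, hashing, and names are assumptions, not the UEC spec.
import hashlib
from itertools import count

PATHS = ["path-0", "path-1", "path-2", "path-3"]  # assumed equal-cost paths

def ecmp_path(flow_5tuple: tuple) -> str:
    """Classic ECMP: every packet of a flow hashes to the same path."""
    digest = hashlib.sha256(repr(flow_5tuple).encode()).digest()
    return PATHS[digest[0] % len(PATHS)]

def make_sprayer():
    """Packet spraying: successive packets of one flow rotate over all paths,
    so a single large flow can use the full cross-sectional bandwidth."""
    seq = count()
    # The chosen path no longer depends on the flow identity at all.
    return lambda flow_5tuple: PATHS[next(seq) % len(PATHS)]

if __name__ == "__main__":
    flow = ("10.0.0.1", "10.0.0.2", 4791, 4791, "UDP")  # RoCE-style flow, assumed
    spray = make_sprayer()
    print([ecmp_path(flow) for _ in range(4)])  # same path four times
    print([spray(flow) for _ in range(4)])      # all four paths used
```

The trade-off this illustrates is that spraying spreads one elephant flow across the fabric but can reorder packets, which is why the consortium pairs it with flexible ordering and congestion control.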
In a separate development, tech giants including Google, Microsoft, Intel, and others have formed the Ultra Accelerator Link (UALink) Promoter Group to develop an open interconnect standard for linking AI accelerators in data centers, aiming to challenge the market dominance of chip-making giant Nvidia. The group also includes Hewlett Packard Enterprise, Cisco, and Broadcom [b39f4837].
Apple has reportedly signed a billion-dollar strategic partnership with OpenAI to incorporate OpenAI's technology into the iOS 18 operating system for iPhones, making Siri more intelligent and human-like. The deal is expected to be announced at Apple's Worldwide Developers Conference in June. OpenAI is also setting up a new committee on AI safety and security [b39f4837].
In other news, OpenAI is training a new flagship model to succeed GPT-4 and has introduced ChatGPT Edu for universities. Google is scaling back its AI-powered search feature, AI Overviews, after reports of hallucinations. French startup Mistral has released Codestral, a generative AI model for coding trained on more than 80 programming languages [b39f4837].
Leonardo AI, a generative AI platform that lets users create images from text prompts, has announced a partnership with IO.net, a decentralized GPU network. Under the agreement, IO.net will supply Leonardo AI with 24 A100 GPUs from May 1 to August 31. Leonardo AI, which has more than 16 million users across various creative industries, will evaluate a longer-term A100 contract and intends to procure 80 L40 GPUs from IO.net under an annual commitment. IO.net's decentralized approach to cloud computing promises greater control and flexibility, and the company claims it can cut users' cloud AI expenses by up to 90%. The partnership, announced on June 5, 2024, aims to improve performance and accelerate image production on the Leonardo AI platform; IO.net's CEO expressed enthusiasm about powering innovative applications in AI and graphics processing [795b8ca9].
Microsoft, Meta, Google, and other technology vendors have launched the Ultra Accelerator Link (UALink), a new industry standard for connectivity in data centers. UALink is designed to improve performance and deployment flexibility in AI computing clusters housed in data centers. The standard covers AI accelerators such as GPUs, enabling efficient interconnection for AI training and inference workloads. Version 1.0 of the standard will allow data center operators to connect up to 1,024 accelerators in a single computing pod. AMD, Broadcom, Cisco, Intel, and HPE have also signed on to form the open industry standard. UALink will enable data centers to add computing resources to a single instance, allowing them to scale capacity on demand without disrupting ongoing workloads. The companies involved are members of the Ultra Ethernet Consortium (UEC), an industry group supporting cooperation around high-performance interconnects. Nvidia, which uses its own NVLink to interconnect GPUs, did not participate in the standard [0d5b3e71].
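The scaling behavior described above can be sketched in a few lines: a single pod whose capacity can grow toward the 1,024-accelerator limit cited for version 1.0 while jobs already running in the pod continue untouched. The class, method names, and workflow below are illustrative assumptions, not part of the UALink specification.

```python
# Minimal sketch of on-demand pod scaling, assuming a v1.0-style cap of
# 1,024 accelerators per pod. Names and structure are illustrative only.
from dataclasses import dataclass, field
from typing import List

UALINK_V1_MAX_ACCELERATORS = 1024  # per-pod limit cited for version 1.0

@dataclass
class ComputePod:
    accelerators: List[str] = field(default_factory=list)
    running_jobs: List[str] = field(default_factory=list)

    def add_accelerators(self, new_ids: List[str]) -> None:
        """Scale capacity on demand; jobs already running are not disturbed."""
        if len(self.accelerators) + len(new_ids) > UALINK_V1_MAX_ACCELERATORS:
            raise ValueError("exceeds the 1,024-accelerator pod limit")
        self.accelerators.extend(new_ids)

    def submit(self, job: str) -> None:
        self.running_jobs.append(job)

if __name__ == "__main__":
    pod = ComputePod([f"gpu-{i}" for i in range(512)])
    pod.submit("training-run-A")
    pod.add_accelerators([f"gpu-{i}" for i in range(512, 768)])  # grow to 768
    print(len(pod.accelerators), pod.running_jobs)  # 768 ['training-run-A']
```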