Source: CryptoNewsNet
Original Title: Bitcoin miners chase AI demand as Nvidia says Rubin is already in production
Nvidia CEO Jensen Huang announced that the company’s next-generation Vera Rubin platform is already in “full production,” revealing additional details about hardware designed to deliver five times the artificial-intelligence computing capacity of Nvidia’s previous systems.
Rubin is expected to arrive later this year and targets the fastest-growing segment of the AI business: inference, the work of serving outputs from already-trained models.
According to Huang, Rubin’s flagship server will pair 72 of Nvidia’s graphics processing units with 36 central processors, and these servers can be linked into larger “pods” containing more than 1,000 Rubin chips.
Efficiency was a major theme. Huang said Rubin systems could make generating AI “tokens” (the basic units of output from language models) roughly 10 times more efficient, aided by a proprietary data type that Nvidia hopes the broader industry will adopt. Notably, that gain comes with only a 1.6-times increase in transistor count.
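Putting those two quoted figures together, a minimal back-of-the-envelope sketch (using only the numbers cited above, not official Nvidia benchmarks) shows the improvement implied per transistor:

```python
# Rough arithmetic on the figures quoted in the article (assumed values,
# not measured results): a ~10x gain in token-generation efficiency
# achieved with only a ~1.6x increase in transistor count implies the
# per-transistor efficiency gain below.

token_efficiency_gain = 10.0   # "approximately 10 times", per Huang
transistor_count_gain = 1.6    # "1.6-times increase in transistor count"

per_transistor_gain = token_efficiency_gain / transistor_count_gain
print(f"Implied efficiency gain per transistor: ~{per_transistor_gain:.1f}x")
# -> Implied efficiency gain per transistor: ~6.2x
```

In other words, if both figures hold, most of the claimed improvement would come from architectural and data-format changes rather than from simply adding more transistors.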