Many people look at Walrus and their first reaction is that its relationship with Sui seems uncomfortably tight. But this is not a flaw; it is a deliberate design choice.

Sui itself is aggressive about parallel execution. Its object model separates state into objects: transactions that touch independent objects are processed concurrently, and shared objects reach sub-second finality through the Mysticeti consensus protocol. What does this mean for Walrus? Its metadata and coordination layers all run on Sui, so they do not become a bottleneck. By contrast, storage chains with serial consensus make every large upload wait on the whole network, which makes for a deeply frustrating user experience.
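To make that concrete, here is a toy Python sketch, not Sui's actual scheduler, of how an object model exposes parallelism: transactions over disjoint objects land in the same batch and can execute concurrently, while a transaction sharing an object with an earlier batch is scheduled after it.

```python
# Toy model (not Sui's real scheduler) of object-based parallelism:
# transactions touching disjoint objects share a batch and run
# concurrently; a transaction that conflicts with an earlier batch
# must be scheduled in a later one.
def schedule(transactions):
    """transactions: list of (tx_id, set_of_object_ids) pairs."""
    batches = []  # each batch: {"txs": [...], "objects": set(...)}
    for tx_id, objects in transactions:
        # A tx must run after the last batch it conflicts with.
        last_conflict = -1
        for i, batch in enumerate(batches):
            if not batch["objects"].isdisjoint(objects):
                last_conflict = i
        target = last_conflict + 1
        if target == len(batches):
            batches.append({"txs": [], "objects": set()})
        batches[target]["txs"].append(tx_id)
        batches[target]["objects"] |= objects
    return batches

txs = [("tx1", {"coin_a"}), ("tx2", {"coin_b"}), ("tx3", {"coin_a", "nft_c"})]
for i, batch in enumerate(schedule(txs)):
    print(f"batch {i}: {batch['txs']}")
# batch 0: ['tx1', 'tx2']  <- disjoint objects, executed in parallel
# batch 1: ['tx3']         <- conflicts with tx1 on coin_a, so it waits
```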

The real innovation lies in the slicing. Walrus's erasure coding uses an interesting parameter design: conservative yet flexible. Conservative, in that the redundancy factor starts low, at 1.5x, while still targeting high availability; flexible, in that governance voting can raise it to 3x as needed. Why dare to raise redundancy at all? Because Sui's high throughput makes the extra coordination transactions cheap.
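A quick sketch of the trade-off behind those numbers (the parameters are illustrative, not Walrus's actual encoding): with systematic erasure coding, any k of n shards can reconstruct the blob, so the redundancy factor directly sets how many shard losses are survivable.

```python
# Illustrative redundancy math only -- not Walrus's actual shard layout.
import math

def shard_plan(k_data_shards: int, redundancy: float):
    """Return (total_shards, max_losses_tolerated).

    With systematic erasure coding, any k of n shards reconstruct
    the blob, so up to n - k shard losses are survivable.
    """
    n_total = math.ceil(k_data_shards * redundancy)
    return n_total, n_total - k_data_shards

for r in (1.5, 2.0, 3.0):  # the governance-tunable range mentioned above
    n, losses = shard_plan(100, r)
    print(f"redundancy {r}x: {n} shards, tolerates {losses} losses")
```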

The write path works like this: the user initiates a storage request; the system slices the file into hundreds of fragments and generates erasure proofs; the proofs are verified in parallel on Sui; distribution instructions are then broadcast concurrently to the storage nodes. Nodes store the fragments on receipt, reply with confirmations once done, and the aggregated confirmations are recorded on-chain. The entire flow completes in seconds.
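Here is a minimal asyncio sketch of that write path; the names (store_shard, store_blob) are hypothetical, not the real Walrus SDK. It slices a blob, fans the fragments out to nodes concurrently, then aggregates the confirmations.

```python
# Minimal sketch of the fan-out/fan-in write path described above.
# store_shard / store_blob are made-up names, not the Walrus API.
import asyncio
import hashlib

async def store_shard(node_id: int, shard: bytes) -> str:
    await asyncio.sleep(0.01)  # stand-in for network + disk latency
    return hashlib.sha256(shard).hexdigest()  # node's stored-shard receipt

async def store_blob(blob: bytes, n_shards: int = 8):
    # Slice the blob (real Walrus distributes erasure-coded shards,
    # not raw byte ranges).
    size = max(1, len(blob) // n_shards)
    shards = [blob[i:i + size] for i in range(0, len(blob), size)]
    # Broadcast to all nodes concurrently and gather confirmations.
    receipts = await asyncio.gather(
        *(store_shard(i, s) for i, s in enumerate(shards))
    )
    # In the real protocol the aggregate is recorded on Sui;
    # here we just return it.
    return receipts

receipts = asyncio.run(store_blob(b"example blob payload" * 100))
print(f"{len(receipts)} shard confirmations aggregated")
```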

What does seconds-level mean in practice? For GB- or TB-scale AI dataset migrations, it means progressing at full speed with no waiting for batch time windows, something centralized storage cannot achieve at all.
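Some back-of-the-envelope arithmetic shows why pipelined, seconds-level writes matter here; every figure below is an assumption, and real throughput is also bounded by upload bandwidth.

```python
# Illustrative arithmetic only -- all figures assumed, and actual
# migration speed is also limited by network bandwidth.
dataset_gb = 1024   # a 1 TB dataset
blob_mb = 100       # migrated as 100 MB blobs
settle_s = 2.0      # assumed per-blob settlement time, seconds
parallel = 64       # blobs written concurrently

n_blobs = dataset_gb * 1024 / blob_mb   # ~10,486 blobs
waves = n_blobs / parallel              # ~164 pipelined write waves
print(f"settlement time: ~{waves * settle_s / 60:.0f} min")  # ~5 min
```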

Another application scenario is real-time AI agent inference. An agent needs to dynamically fetch model weights and historical datasets for its inference computations; if storage latency is high, the whole inference loop stalls. On Walrus, hot data is automatically cached across multiple replicas, read paths are parallelized as far as possible, and Sui's object model lets the cache replicas coordinate concurrently. For applications with hard real-time requirements, this is a genuine performance breakthrough.
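One way to picture that parallel read path is a hedged read that races several cache replicas and keeps the first answer; the replica names and latencies below are made up for illustration.

```python
# Sketch of a hedged read: race several cached replicas, take the
# first response, cancel the rest. Endpoints and latencies are fake.
import asyncio
import random

async def fetch_from_replica(replica: str, blob_id: str) -> bytes:
    await asyncio.sleep(random.uniform(0.01, 0.2))  # simulated latency
    return f"{blob_id}@{replica}".encode()

async def hedged_read(blob_id: str, replicas: list[str]) -> bytes:
    tasks = [asyncio.create_task(fetch_from_replica(r, blob_id))
             for r in replicas]
    done, pending = await asyncio.wait(tasks,
                                       return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()  # drop the slower replicas once one has answered
    return done.pop().result()

data = asyncio.run(hedged_read("model-weights-v1",
                               ["cache-a", "cache-b", "cache-c"]))
print(data)
```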