
GPUs in Revolt! Meta's Llama 3 Training Plagued by Hardware Failures


According to Meta’s new research report, the cluster of 16,384 NVIDIA H100 GPUs used to train the 405-billion-parameter Llama 3 model proved a constant headache: it suffered 419 unexpected failures in just 54 days, an average of one failure roughly every three hours.

Because training at this scale is tightly synchronized, even a single GPU failure can halt the entire job, which then has to be restarted. According to the Meta team’s report, of these 419 failures, 148 (30.1%) were due to various GPU issues, and 72 (17.2%) were caused by failures of the GPUs’ high-bandwidth memory (HBM3). Incredibly, there were only two CPU failures in 54 days. The remaining 41.3% of unexpected interruptions stemmed from a mix of software bugs, network cables, and adapter issues.
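As a quick sanity check on the “every three hours” figure, the arithmetic works out from the report’s own numbers. This is a minimal Python sketch using only the figures quoted above:

```python
# Failure-rate arithmetic using only the figures quoted in Meta's report.
days = 54
failures = 419

mtbf_hours = days * 24 / failures
print(f"Mean time between failures: {mtbf_hours:.1f} hours")  # ~3.1 hours

failures_per_day = failures / days
print(f"Average failures per day: {failures_per_day:.1f}")    # ~7.8
```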

The Meta team developed an array of tools and strategies to navigate this chaos. They reduced job startup and checkpointing times, used PyTorch’s NCCL flight recorder to diagnose performance issues, and built ways to identify faulty GPUs. They also accounted for environmental factors, such as the effect of midday temperature fluctuations on GPU performance and the strain that running tens of thousands of GPUs simultaneously puts on the data center’s power grid.
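The report itself does not include code, but the core checkpointing idea is easy to sketch. Below is a minimal, hypothetical PyTorch example (the model, checkpoint interval, and file path are illustrative assumptions, not Meta’s actual tooling): by saving state frequently and resuming from the last checkpoint, a failure costs at most one checkpoint interval of work rather than the whole run, which is why shrinking checkpoint time matters so much at this scale.

```python
import os
import torch

# Minimal checkpoint/resume sketch (illustrative, not Meta's actual code).
# Frequent, fast checkpoints bound the work lost to a failure: after a
# crash, the job resumes from the last saved step instead of starting over.

CKPT_PATH = "checkpoint.pt"  # hypothetical path

model = torch.nn.Linear(1024, 1024)        # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters())

start_step = 0
if os.path.exists(CKPT_PATH):
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_step = state["step"] + 1          # resume where we left off

for step in range(start_step, 10_000):
    # ... forward/backward/optimizer.step() would go here ...
    if step % 500 == 0:  # interval trades checkpoint overhead vs. lost work
        torch.save(
            {"model": model.state_dict(),
             "optimizer": optimizer.state_dict(),
             "step": step},
            CKPT_PATH,
        )
```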


As the parameter counts of AI models like the 405-billion-parameter Llama 3 keep growing, massive training clusters of this kind will become more common. xAI, for example, plans a cluster of 100,000 H100 graphics cards, which suggests even greater reliability challenges in future AI training. Meta’s efforts to solve these issues now are therefore critical for larger-scale projects, as the rough extrapolation below illustrates.
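If failure rates scale roughly linearly with component count (a simplifying assumption on our part, not a claim from Meta’s report), the back-of-envelope arithmetic is sobering:

```python
# Hypothetical extrapolation: assumes failures scale linearly with GPU count.
llama3_gpus = 16_384
llama3_mtbf_hours = 3.1            # observed: ~419 failures over 54 days

xai_gpus = 100_000
scaled_mtbf = llama3_mtbf_hours * llama3_gpus / xai_gpus
print(f"Projected time between failures: {scaled_mtbf * 60:.0f} minutes")
# ~30 minutes, under this rough linear-scaling assumption
```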

Despite it all, Meta achieved an effective training time of over 90%, meaning roughly 49 of the 54 days went to useful work. Without the failures, it could have been even more efficient. These experiences should help Meta build more robust and resilient systems for future projects.

What do you think? Share your thoughts in the comments section below.
