
Meta’s Next Llama AI Models Are Training on a GPU Cluster ‘Bigger Than Anything’ Else

Managing such a gargantuan array of chips to develop Llama 4 is likely to present unique engineering challenges and require vast amounts of energy. Meta executives on Wednesday sidestepped an analyst question about energy access constraints in parts of the US that have hampered companies’ efforts to develop more powerful AI. According to one estimate, a cluster of 100,000 H100 chips would require 150 megawatts of power. The largest national lab supercomputer in the United States, El Capitan, by c...
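The 150-megawatt figure is consistent with a simple back-of-envelope calculation. The sketch below (not from the article) assumes roughly 700 W of board power per H100 SXM GPU, an additional share per GPU for host CPUs, memory, and networking, and a datacenter power usage effectiveness (PUE) factor for cooling and power delivery; all three of those assumptions are illustrative, not reported numbers.

```python
# Back-of-envelope estimate of facility power for a 100,000-GPU H100 cluster.
# Assumptions (not from the article): ~700 W per H100 SXM GPU, ~55% extra
# draw per GPU for host CPU/RAM/NIC/fans, and a PUE of ~1.2.

NUM_GPUS = 100_000
GPU_TDP_W = 700          # H100 SXM board power
SERVER_OVERHEAD = 0.55   # assumed non-GPU server draw, as a fraction of GPU power
PUE = 1.2                # assumed power usage effectiveness (cooling, distribution)

it_power_w = NUM_GPUS * GPU_TDP_W * (1 + SERVER_OVERHEAD)
facility_power_mw = it_power_w * PUE / 1e6

print(f"Estimated facility power: {facility_power_mw:.0f} MW")
# ~130 MW under these assumptions, in the same ballpark as the
# 150 MW estimate cited in the article.
```

Varying the overhead and PUE assumptions within plausible ranges moves the result between roughly 100 and 170 MW, which is why published estimates for such clusters cluster around the figure the article cites.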

Read more at wired.com
