RT @HPC_Guru
With BF16, @nvidia H100s are both faster and more cost-effective than A100s for training

With FP8, H100s were 30% more cost-effective and 3x faster than A100s for a 7-billion-parameter MosaicGPT model

via @MosaicML @ProfMatsuoka twitter.com/MosaicML/status/16
