RT @HPC_Guru
With BF16, @nvidia H100s are both faster and more cost-effective than A100s for #LLM training.
With FP8, H100s were 30% more cost-effective and 3x faster than A100s for a 7-billion-parameter MosaicGPT model.
#AI #GPU via @MosaicML @ProfMatsuoka https://twitter.com/MosaicML/status/1651684626207784960