News Score: Score the News, Sort the News, Rewrite the Headlines

Writing Speed-of-Light Flash Attention for 5090 in CUDA C++

Aug 23, 2025

In this post, I will walk through how I learned to implement Flash Attention for the 5090 in CUDA C++. The main objective is to learn to write attention in CUDA C++, since many features are not available in Triton, such as MXFP8 / NVFP4 MMA for sm120. I also feel this is a natural next step after learning about matmul kernels. Lastly, there are many excellent blog posts on writing fast matmul kernels, but there are none for attention. So I want to take this chance to write up something nicel...

Read more at gau-nernst.github.io
