A beginner's guide to deploying LLMs with AMD on Windows using PyTorch - AMD GPUOpen
If you’re interested in deploying advanced AI models on your local
hardware, leveraging a modern AMD GPU or APU can provide an efficient
and scalable solution. You don’t need dedicated AI infrastructure to
experiment with Large Language Models (LLMs); a capable Microsoft® Windows® PC with
PyTorch installed and equipped with a recent AMD graphics card is all you
need.
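As a quick sanity check once an AMD-enabled PyTorch build is installed, a few lines of Python can confirm that the GPU is visible and usable. This is a minimal sketch, assuming the build exposes the AMD GPU through PyTorch's `torch.cuda` namespace (as ROCm/HIP builds typically do); adjust the device string if your build reports the device differently.

```python
# Minimal sketch: verify that PyTorch can see and use the AMD GPU.
# Assumption: the AMD-enabled PyTorch build exposes the GPU through the
# torch.cuda namespace (as ROCm/HIP builds do).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"PyTorch {torch.__version__} using device: {device}")
if device == "cuda":
    print("GPU:", torch.cuda.get_device_name(0))

# A small matrix multiply exercises the backend end to end.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b
print("Result:", c.shape, "on", c.device)
```

If the device reports as the GPU and the multiply completes, the same device can be targeted by any PyTorch-based LLM runtime.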
PyTorch for AMD on Windows and Linux is now available as a public
preview. You can
now use native PyTorch for AI inference on AMD R...
Read more at gpuopen.com
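To give a sense of what local LLM inference can look like in practice, here is a hedged sketch using the Hugging Face `transformers` library on top of PyTorch. The model name is a small placeholder chosen purely for illustration, not a recommendation from the article, and the snippet assumes the same `torch.cuda` device mapping as above.

```python
# Illustrative sketch only: run a small text-generation model locally with
# Hugging Face transformers on top of PyTorch. "gpt2" is a placeholder
# model chosen because it is small; swap in any PyTorch causal LM your
# hardware can hold. Assumes the AMD GPU appears as the "cuda" device.
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1  # pipeline device index
generator = pipeline("text-generation", model="gpt2", device=device)

prompt = "Running large language models on a local AMD GPU"
output = generator(prompt, max_new_tokens=40, do_sample=False)
print(output[0]["generated_text"])
```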