NVIDIA Dynamo: A Datacenter-Scale Distributed Inference Serving Framework
NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and captures LLM-specific capabilities such as:
Disaggregated prefill & decode inference – Maximizes GPU throughput and facilitates the trade-off between throughput and latency (see the sketch after this list).
Dyn...
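
Disaggregation splits the compute-bound prompt-processing (prefill) phase from the latency-bound token-generation (decode) phase so each can be scheduled and scaled independently. The following is a minimal, illustrative sketch of that handoff using plain Python queues and threads; the names (Request, KVHandle, prefill_worker, decode_worker) are hypothetical and are not part of Dynamo's API.

```python
# Conceptual sketch of disaggregated prefill/decode serving.
# All names here are illustrative stand-ins, not Dynamo's actual API.
from dataclasses import dataclass
from queue import Queue
from threading import Thread


@dataclass
class KVHandle:
    """Stand-in for the KV-cache state produced by the prefill phase."""
    request_id: str
    prompt_tokens: list


@dataclass
class Request:
    request_id: str
    prompt: str
    max_new_tokens: int = 8


def prefill_worker(inbox: Queue, handoff: Queue) -> None:
    """Compute-bound phase: process the full prompt once, emit a KV handle."""
    while (req := inbox.get()) is not None:
        kv = KVHandle(req.request_id, req.prompt.split())
        handoff.put((req, kv))  # hand the cached context to a decode worker


def decode_worker(handoff: Queue, results: Queue) -> None:
    """Latency-bound phase: generate tokens one at a time against the KV cache."""
    while (item := handoff.get()) is not None:
        req, kv = item
        # Dummy "generation": echo the last prompt token repeatedly.
        output = [kv.prompt_tokens[-1]] * req.max_new_tokens
        results.put((req.request_id, " ".join(output)))


if __name__ == "__main__":
    inbox, handoff, results = Queue(), Queue(), Queue()
    Thread(target=prefill_worker, args=(inbox, handoff), daemon=True).start()
    Thread(target=decode_worker, args=(handoff, results), daemon=True).start()

    inbox.put(Request("r1", "Explain disaggregated serving in one sentence."))
    print(results.get())
```

In an actual disaggregated deployment, the handoff carries KV-cache state between GPU workers rather than Python objects, which is what lets prefill and decode capacity be provisioned and tuned independently to trade throughput against latency.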