Qwen1.5-32B: Fitting the Capstone of the Qwen1.5 Language Model Series
GITHUB
HUGGING FACE
MODELSCOPE
DEMO
DISCORD

Introduction

The open-source community has long sought a model that strikes an ideal balance between performance, efficiency, and memory footprint. Despite the emergence of cutting-edge models such as Qwen1.5-72B and DBRX, these models still face persistent challenges: large memory consumption, slow inference speed, and substantial finetuning costs. A growing consensus within the field now points to a model with approximately 30 billion parameters as th...
Read more at qwenlm.github.io