Alibaba’s new open source model QwQ-32B matches DeepSeek-R1 with way smaller compute requirements
March 5, 2025 3:06 PM
Qwen Team, the division of Chinese e-commerce giant Alibaba that develops its growing family of open-source Qwen large language models (LLMs), has introduced QwQ-32B, a new 32-billion-parameter reasoning model designed to improve performance on complex problem-solving tasks through reinforcement learning (RL).
Read more at venturebeat.com