Magma: A Foundation Model for Multimodal AI Agents
Magma is the first foundation model capable of interpreting and grounding multimodal inputs within its environment. Given a described goal, it can formulate plans and execute actions to achieve it. By effectively transferring knowledge from freely available visual and language data, Magma bridges verbal, spatial, and temporal intelligence to navigate complex tasks and settings.
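For readers who want to try the released checkpoint, the sketch below shows one way to load Magma through Hugging Face Transformers and ask it to ground an instruction in a screenshot. The checkpoint name (microsoft/Magma-8B), the prompt format, and the processor keyword arguments are assumptions based on the public model card and may differ across releases; see the project page for the authoritative recipe.

```python
# Minimal inference sketch (assumptions: the public "microsoft/Magma-8B"
# checkpoint, a CUDA device, and the prompt/processor conventions from the
# model card; exact image-token and keyword names may differ across releases).
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Magma-8B"  # assumed released checkpoint name
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# A goal grounded in a visual observation, e.g. a UI screenshot or camera frame.
image = Image.open("screenshot.png").convert("RGB")
prompt = "<image>\nWhat action should I take to open the settings menu?"

inputs = processor(images=[image], texts=prompt, return_tensors="pt").to("cuda")
with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```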
Abstract
We present Magma, a foundation model serving multimodal AI agentic tasks in both the digital ...
Read more at microsoft.github.io