Ollama is now powered by MLX on Apple Silicon in preview · Ollama Blog
Today, we’re previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple’s machine learning framework.
This unlocks new performance to accelerate your most demanding work on macOS:
Personal assistants like OpenClaw, which now responds much faster
Coding agents like Pi, Claude Code, OpenCode, or Codex
Fastest performance on Apple silicon, powered by MLX
Ollama on Apple silicon is now built on top of Apple’s machine learning framework, MLX.
Read more at ollama.com