LLM function calls don't scale; code orchestration is simpler and more effective.
20 May, 2025
TL;DR: Giving LLMs the full output of tool calls is costly and slow. Output schemas will let us get structured data back, so the LLM can orchestrate processing with generated code instead. Orchestrating tool calls in code is simpler and more effective.
One common practice when working with MCP tool calls is to feed the output of a tool back into the LLM as a message and ask the LLM for the next step. The hope is that the model figures out how to interpret the data and identify the corre...
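A minimal sketch of the code-orchestration alternative, with an entirely hypothetical `list_tickets_tool` standing in for an MCP tool that declares an output schema: because the shape of the result is known, LLM-generated code can filter the data locally and pass only a small summary back to the model, rather than round-tripping the full payload through a message.

```python
import json

def list_tickets_tool() -> str:
    """Stand-in for an MCP tool call; returns JSON matching its declared output schema.
    (Hypothetical tool and data, for illustration only.)"""
    return json.dumps([
        {"id": 1, "status": "open", "priority": "high"},
        {"id": 2, "status": "closed", "priority": "low"},
        {"id": 3, "status": "open", "priority": "low"},
    ])

# Instead of pasting the raw tool output back into the conversation,
# run generated code like this locally over the structured result:
tickets = json.loads(list_tickets_tool())
open_high = [t["id"] for t in tickets
             if t["status"] == "open" and t["priority"] == "high"]

# Only this small summary needs to go back to the LLM.
print(open_high)
```

The point is that the expensive part, interpreting and filtering the full tool output, happens in plain code; the model only ever sees the condensed result.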
Read more at jngiam.bearblog.dev