
LLM Tool Use Patterns: Giving AI Agents Superpowers

Deep dive into how AI agents use tools — designing tool schemas, handling errors, and chaining tool calls for complex workflows.

Amit Shrivastava · April 13, 2026 · 8 min read


As a Senior Software Engineer with over a decade of experience spanning frontend, Web3, and now leading the charge in AI development, I've seen a lot of technological shifts. But few are as fundamentally paradigm-altering as the rise of Large Language Models (LLMs) and, more specifically, their incredible ability to use tools. It's not just about an LLM chatting with you anymore; it's about an LLM doing things in the real or digital world. This concept of "tool use" is what truly elevates an LLM from a sophisticated chatbot to a genuine AI agent.

Let's dive deep into the practicalities of designing, implementing, and orchestrating tool use, providing your AI agents with superpowers that go far beyond their inherent language capabilities.

Understanding the Core: What is Tool Use?

At its heart, tool use is about allowing an LLM to interact with external systems or data sources. Think of an LLM as a brilliant, multilingual brain that's stuck in a room. Tools are the doors, windows, and instruments it can use to perceive and affect the outside world. This could be anything from calling an API to check the weather, sending an email, querying a database, or even executing complex mathematical calculations.

The magic happens when the LLM, based on your prompt, determines that it needs a specific piece of information or needs to perform an action that it cannot inherently do. It then "decides" to use a tool, formulates the input for that tool, and then interprets the tool's output to continue its task or generate a response.
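Concretely, that "decision" surfaces as structured output rather than prose. A minimal sketch of the OpenAI-style shapes involved, and the first branch an orchestrator takes (the exact field names follow OpenAI's function calling format; other providers differ slightly):

```typescript
// Shape of an assistant turn in OpenAI-style function calling (simplified).
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string }; // arguments is a JSON string
}

interface AssistantMessage {
  role: "assistant";
  content: string | null; // null when the model opts to call tools instead
  tool_calls?: ToolCall[];
}

// The orchestrator's first branch: did the model answer, or ask for a tool?
function wantsToolCall(msg: AssistantMessage): boolean {
  return Array.isArray(msg.tool_calls) && msg.tool_calls.length > 0;
}
```

Everything downstream — execution, error handling, chaining — hangs off this one branch.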

Designing Robust Tool Schemas

The foundation of effective tool use lies in well-defined tool schemas. This is how you "teach" the LLM what tools are available, what they do, and what inputs they expect. Most LLM frameworks (like LangChain, LlamaIndex, or even raw OpenAI function calling) rely on a JSON Schema-like structure for defining tools.

Let's imagine we want to create a tool to fetch the current stock price of a company.

// Example Tool Schema Definition (simplified for illustration)
const getStockPriceTool = {
  type: "function",
  function: {
    name: "getStockPrice",
    description: "Fetches the current stock price for a given ticker symbol.",
    parameters: {
      type: "object",
      properties: {
        tickerSymbol: {
          type: "string",
          description: "The stock ticker symbol (e.g., 'AAPL' for Apple)."
        },
      },
      required: ["tickerSymbol"],
    },
  },
};

// In a real application, you'd register this with your LLM framework.
// Example using OpenAI's function calling API:
const tools = [getStockPriceTool /* ...other tools */];
// Then pass `tools` to your chat completion request.

Key Considerations for Schema Design:

  • Descriptive Names & Descriptions: The LLM relies heavily on these. Be clear, concise, and explicit about what the tool does.
  • Precise Parameter Types: Use standard JSON Schema types (string, number, boolean, array, object).
  • Clear Parameter Descriptions: Explain what each parameter represents and what kind of values it expects.
  • Required Fields: Explicitly mark parameters that are mandatory.
  • Enums for Constrained Values: If a parameter can only take a few predefined values (e.g., unit: 'celsius' | 'fahrenheit'), use an enum in your schema. This guides the LLM significantly.
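To make the last point concrete, here is a hypothetical getWeather tool schema (the tool name and fields are illustrative, not from a real API) whose unit parameter is constrained with an enum:

```typescript
// Hypothetical weather tool whose `unit` parameter is constrained via enum.
const getWeatherTool = {
  type: "function",
  function: {
    name: "getWeather",
    description: "Gets the current weather for a city.",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. 'New York'." },
        unit: {
          type: "string",
          enum: ["celsius", "fahrenheit"], // constrains the model's choices
          description: "Temperature unit to return.",
        },
      },
      required: ["city"],
    },
  },
};
```

With the enum in place, the model cannot hallucinate a value like "kelvin" — it must pick from the listed options.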

Implementing the Tool Execution Logic

Once you've defined your tool schemas, you need the actual code that executes when the LLM decides to use a tool. This is where you connect to your external APIs, databases, or services.

// Example Tool Implementation
async function getStockPrice(tickerSymbol: string): Promise<string> {
  try {
    // In a real scenario, this would call an external API
    console.log(`Fetching stock price for ${tickerSymbol}...`);
    const response = await fetch(`https://api.stocks.example.com/price?symbol=${tickerSymbol}`);
    if (!response.ok) {
      throw new Error(`Failed to fetch stock price for ${tickerSymbol}: ${response.statusText}`);
    }
    const data = await response.json();
    return JSON.stringify({
      tickerSymbol: data.symbol,
      price: data.currentPrice,
      currency: data.currency
    });
  } catch (error: any) {
    console.error(`Error fetching stock price: ${error.message}`);
    // IMPORTANT: Return an error message the LLM can understand and act upon
    return JSON.stringify({ error: `Could not retrieve stock price for ${tickerSymbol}. Reason: ${error.message}` });
  }
}

// A mapping for your agent to look up functions
const availableTools: { [key: string]: Function } = {
  getStockPrice: getStockPrice,
  // ... other tool functions
};

// Simplified orchestrator logic for demonstration.
// In a real system, your LLM framework handles this.
async function executeToolCall(toolCall: any): Promise<string> {
  const func = availableTools[toolCall.function.name];
  if (func) {
    const args = JSON.parse(toolCall.function.arguments);
    // Note: spreading Object.values assumes the object's key order matches the
    // function signature; passing the args object itself is safer for multi-arg tools.
    return await func(...Object.values(args));
  }
  throw new Error(`Tool ${toolCall.function.name} not found.`);
}

Handling Errors Gracefully

This is absolutely critical. An LLM's tool-using capabilities can quickly turn into a frustrating user experience if tools fail silently or return unparsable errors.

  • Structured Error Responses: Design your tool implementations to return structured error messages (e.g., JSON objects with error fields). The LLM is excellent at parsing these and can then apologize to the user, ask for clarification, or suggest alternative actions.
  • Retry Mechanisms: For transient network errors, consider implementing retries in your tool functions.
  • User-Friendly Messages: Translate technical errors into explanations a non-technical user can understand. "The stock API is currently unreachable" is better than "HTTP 503 Service Unavailable."
  • LLM's Role in Error Handling: Explicitly prompt the LLM on how to handle tool errors. For example: "If a tool call fails, inform the user about the failure and suggest alternative approaches if possible."
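A minimal sketch of the retry idea, assuming the tool function is passed in as a callback (the helper name, attempt count, and backoff policy are illustrative choices, not a prescribed API). On final failure it returns a structured error string the LLM can parse, instead of throwing:

```typescript
// Generic retry wrapper for transient tool failures. On final failure it
// returns a structured error the LLM can act on, rather than throwing.
async function withRetries(
  fn: () => Promise<string>,
  maxAttempts = 3,
  delayMs = 200
): Promise<string> {
  let lastError: Error | undefined;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Linear backoff between attempts
        await new Promise((resolve) => setTimeout(resolve, delayMs * attempt));
      }
    }
  }
  return JSON.stringify({
    error: `Tool failed after ${maxAttempts} attempts: ${lastError?.message}`,
  });
}
```

Wrapping a tool is then just `withRetries(() => getStockPrice("AAPL"))` — the orchestrator never sees an exception, only a success payload or a structured error.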

Chaining Tool Calls for Complex Workflows

The real power emerges when an LLM can perform multiple tool calls in sequence, often using the output of one tool as input for the next. This creates sophisticated multi-step workflows.

Consider a request like: "What's the weather like in New York, and what stocks are performing well today related to renewable energy?"

This would likely involve:

  1. Tool Call 1: getCoordinates(location: "New York")
  • Output: { latitude: 40.7128, longitude: -74.0060 }
  2. Tool Call 2: getWeather(latitude: 40.7128, longitude: -74.0060) (using output from Tool Call 1)
  • Output: { temperature: 20, conditions: "Clear" }
  3. Tool Call 3: searchStocks(query: "renewable energy", sort: "top_gainers")
  • Output: [{ symbol: "TSLA", ... }, { symbol: "ENPH", ... }]
  4. Final LLM Response: Consolidating all tool outputs into a coherent answer.

To enable this, your agent architecture needs to:

  • Maintain Conversation History: The LLM needs access to previous turns, including tool calls and their outputs, to make informed decisions for subsequent calls.
  • Iterative Prompting: The core loop of an agent often involves:
  1. User Query -> LLM (thinks)
  2. LLM decides to call Tool A -> Execute Tool A
  3. Tool A's Output -> LLM (thinks again, maybe calls Tool B)
  4. ...until the task is complete or new information is needed.
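The history-maintenance piece can be sketched in a few lines. In OpenAI-style APIs a tool's output goes back in as a message with role "tool", linked to the call that produced it; the field names below follow that convention, and the helper itself is an illustrative sketch:

```typescript
// Minimal message-history handling for chained tool calls (OpenAI-style roles).
type Message =
  | { role: "system" | "user" | "assistant"; content: string }
  | { role: "tool"; tool_call_id: string; content: string };

// After executing a tool, append its output as a `tool` message so the next
// model call can "observe" the result. Returns a new array; history is not mutated.
function appendToolResult(
  history: Message[],
  toolCallId: string,
  output: string
): Message[] {
  return [...history, { role: "tool", tool_call_id: toolCallId, content: output }];
}
```

Keeping the history immutable like this makes it easy to replay or branch a conversation when debugging multi-step workflows.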

Beyond Basics: Advanced Patterns

  • Dynamic Tool Creation: Imagine an agent that can not only use tools but also generate new, simple tools on the fly based on user needs (e.g., a regex pattern generator tool). This is more experimental but incredibly powerful.
  • Tool Reasoning: Instead of just calling a tool, the LLM provides its reasoning for needing that tool call. This helps in debugging and understanding the agent's decision-making process. Many frameworks now support this by asking the LLM to output a thought before a tool_call.
  • User Confirmation for Destructive Actions: For actions that modify external systems (e.g., "send email," "delete file"), it's crucial to prompt the user for confirmation before executing the tool. The LLM can generate the confirmation prompt.
  • Contextual Tool Selection: Not all tools should be available all the time. Depending on the conversation's context, you might dynamically enable or disable certain tools to reduce the LLM's search space and improve relevance.
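The confirmation-gate pattern is easy to sketch. Here the tool names, the gate's result shape, and the Boolean confirmation flag are all illustrative assumptions; real systems often persist the pending call and resume it after the user responds:

```typescript
// Sketch of a confirmation gate: destructive tools are flagged, and the
// orchestrator pauses for user approval before executing them.
const DESTRUCTIVE_TOOLS = new Set(["sendEmail", "deleteFile"]); // illustrative names

type GateResult =
  | { status: "executed"; output: string }
  | { status: "needs_confirmation"; toolName: string };

async function gatedExecute(
  toolName: string,
  run: () => Promise<string>,
  userConfirmed: boolean
): Promise<GateResult> {
  if (DESTRUCTIVE_TOOLS.has(toolName) && !userConfirmed) {
    // Surface this to the user; the LLM can phrase the confirmation prompt.
    return { status: "needs_confirmation", toolName };
  }
  return { status: "executed", output: await run() };
}
```

The key design choice is that the gate lives in your orchestrator, not in the prompt: even if the model is jailbroken into requesting a destructive call, execution still stops at your code.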

Putting it All Together: The Agent's Loop

The typical agent loop with tool use looks something like this:

  1. User Input: The user provides a prompt.
  2. LLM Decision: The LLM receives the prompt along with its system message, which includes descriptions of available tools. It decides:
  • To respond directly (no tool needed).
  • To call one or more tools.
  3. Tool Call (if applicable): If a tool is called, the LLM outputs the tool name and its arguments.
  4. Tool Execution: Your application intercepts this, executes the actual tool function, and captures its output (or error).
  5. Observation: The tool's output is then fed back to the LLM as part of the conversation history, allowing it to "observe" the result.
  6. Repeat or Final Response: The LLM, based on the tool's output, decides if more tools are needed or if it can now generate a final response to the user.
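The whole loop fits in a short function. This is a provider-agnostic sketch: `callModel` stands in for whatever LLM client you use and is injected so the loop stays testable, and the message shapes are simplified assumptions rather than a specific SDK's types. A turn cap guards against runaway loops:

```typescript
// End-to-end agent loop sketch. `callModel` is a stand-in for your LLM client.
type LoopMessage = {
  role: string;
  content: string | null;
  tool_calls?: { id: string; function: { name: string; arguments: string } }[];
};

async function runAgent(
  userInput: string,
  callModel: (history: LoopMessage[]) => Promise<LoopMessage>,
  tools: { [name: string]: (args: any) => Promise<string> },
  maxTurns = 5
): Promise<string> {
  const history: LoopMessage[] = [{ role: "user", content: userInput }];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await callModel(history);
    history.push(reply);
    if (!reply.tool_calls || reply.tool_calls.length === 0) {
      return reply.content ?? ""; // model answered directly
    }
    for (const call of reply.tool_calls) {
      const fn = tools[call.function.name];
      const output = fn
        ? await fn(JSON.parse(call.function.arguments))
        : JSON.stringify({ error: `Unknown tool: ${call.function.name}` });
      history.push({ role: "tool", content: output }); // observation step
    }
  }
  return "Stopped: maximum number of turns reached.";
}
```

Note that an unknown tool name feeds a structured error back into the history instead of crashing the loop, so the model gets a chance to recover.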

This iterative process is what makes LLM agents so dynamic and capable.

The Future is Interconnected

From my perspective, LLMs using tools effectively is not just an enhancement; it's the core differentiator for truly intelligent agents. It's the bridge between raw language understanding and actionable intelligence. As engineers, our role is to sculpt these bridges with robust schemas, fault-tolerant implementations, and thoughtful orchestration. The ability of LLMs to dynamically interact with and leverage external systems marks a significant leap towards more autonomous and capable AI.

Want to chat more about building smart agents or share your own experiences with LLM tool use? Connect with me on LinkedIn or X (formerly Twitter) @YourXHandle! Let's build the future together.

Tool Use
AI Agents
Function Calling
LLMs