Andy Peatling

Architecting AI Agents with TypeScript

10 min read

If you’ve been working with large language models (LLMs) for some time, you’ve probably noticed how quickly we’re moving beyond simple chat interfaces. The real power comes when you can orchestrate LLMs to work with external tools, maintain context, and perform complex tasks. I’ve been building these kinds of AI agents for a while now, and I wanted to share a thoughtful approach to their architecture.

In this post, I’ll walk you through how to build a flexible, maintainable AI agent system in TypeScript using functional programming patterns. This isn’t a theoretical post; I’ll provide complete code examples you can adapt for your own projects.

What Makes an Agent?

First, what separates an agent from a simple chatbot? While chatbots respond to messages in isolation, agents do considerably more. They understand user inputs and maintain context across interactions. They can process information and decide what actions to take. They interact with the real world through APIs, databases, and other tools. And the best ones can improve over time through feedback or memory.

Creating this kind of system requires thoughtful architecture to manage all the moving parts.

The Core Architecture

After building several agent systems, I’ve found a modular approach works best. Your architecture needs four essential components working together.

The Agent Orchestrator serves as your central coordinator. It manages the flow of information between the user, the language model, and external tools. It forwards user messages to the LLM, parses and executes tool calls from the LLM, sends results back to the LLM, and streams responses to the user.

Your Language Model Interface provides a standardized way to work with different LLM providers. It abstracts away differences between APIs (OpenAI, Anthropic, etc.) by converting your messages to provider-specific formats, handling streaming responses consistently, normalizing tool calling formats, and managing authentication and rate limiting.

The Tool Executor handles all interactions with external systems. It maintains a registry of available tools, validates input parameters against schemas, executes the appropriate functions, returns results in a consistent format, and handles errors gracefully.

While optional for simple agents, a Memory System enhances capabilities by storing conversation history, maintaining state between interactions, and potentially embedding knowledge for retrieval.

The Agent Loop

These components work together in a continuous cycle that enables complex multi-step reasoning and actions. The user sends a query, which the Orchestrator receives and stores in memory. The Orchestrator then passes the query and context to the LLM, which generates a response (either text, tool calls, or both).

When tool calls are included, the Orchestrator passes them to the Tool Executor, which validates and executes them. The results go back to the Orchestrator, which forwards them to the LLM for further processing. This might loop several times until the LLM has all the information it needs. Finally, the text response streams to the user, and the conversation state updates in memory.
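
Here’s that loop as rough pseudocode, just to make its shape concrete before we get into the real implementation:

// The agent loop, roughly (a sketch, not the implementation below):
//
//   receive user query            -> store it in memory
//   repeat:
//     response = llm(all memory)  -> text and/or tool calls
//     if the response has tool calls:
//       validate and execute them -> store the results in memory
//       continue                  -> let the LLM read the results
//     else:
//       stream the text to the user, store it in memory, and stop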

Let’s see how to implement this flow in TypeScript.

TypeScript Implementation

I’ll use a functional programming style here since I’ve found it creates more composable, testable systems than class-heavy approaches. Let’s start with our core types:

// Message types that can be exchanged in the system
type Role = 'system' | 'user' | 'assistant' | 'tool';

type Message = {
  role: Role;
  content: string;
  name?: string; // For tool messages, identifies the tool
  toolCallId?: string; // Links tool responses to their requests
};

// Tool-related types
type ToolCall = {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
};

type ToolResult = {
  result: unknown;
  error?: string;
};

// LLM Response - either text chunks or tool call requests
type LLMResponse = string | ToolCall;

The Language Model Interface

Next, we need an interface to abstract different LLM providers:

interface LanguageModelInterface {
  generateResponse(
    messages: Message[],
    options?: {
      temperature?: number;
      maxTokens?: number;
      toolChoice?: 'auto' | 'none' | { name: string };
    }
  ): AsyncGenerator<LLMResponse>;
}

// Configuration for different providers
type LLMConfig = {
  provider: 'openai' | 'anthropic' | 'other';
  model: string;
  apiKey: string;
  // Other provider-specific settings
};

// Factory function to create the appropriate provider
const createLanguageModelInterface = (config: LLMConfig): LanguageModelInterface => {
  switch (config.provider) {
    case 'openai':
      return createOpenAIProvider(config);
    case 'anthropic':
      return createAnthropicProvider(config);
    // Other providers...
    default:
      throw new Error(`Unsupported provider: ${config.provider}`);
  }
};

I’ve used a factory function pattern here because it makes it easy to add new providers later without changing other parts of the code. I’ll be writing a follow-up blog post with more details on the language model interface.
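
To make the factory concrete, here’s a rough, non-streaming sketch of what an OpenAI-backed provider might look like, using the official openai npm package. It glosses over streaming and the provider-specific message mapping (tool messages need a tool_call_id, for instance), so treat it as a starting point rather than a drop-in implementation:

import OpenAI from 'openai';

// A simplified, non-streaming sketch of an OpenAI-backed provider.
// A real implementation would stream chunks and assemble partial tool calls.
const createOpenAIProvider = (config: LLMConfig): LanguageModelInterface => {
  const client = new OpenAI({ apiKey: config.apiKey });

  return {
    async *generateResponse(messages, options) {
      const completion = await client.chat.completions.create({
        model: config.model,
        temperature: options?.temperature,
        max_tokens: options?.maxTokens,
        // NOTE: to enable tool calling you'd also pass a `tools` array here,
        // built from the tool definitions. The cast below glosses over
        // provider-specific fields such as tool_call_id.
        messages: messages.map((m) => ({ role: m.role, content: m.content })) as any,
      });

      const choice = completion.choices[0];

      // Yield any tool calls the model requested
      for (const call of choice.message.tool_calls ?? []) {
        yield {
          id: call.id,
          name: call.function.name,
          arguments: JSON.parse(call.function.arguments) as Record<string, unknown>,
        };
      }

      // Yield the text content, if any
      if (choice.message.content) {
        yield choice.message.content;
      }
    },
  };
};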

The Tool Executor

Let’s implement our tool executor. This handles discovering, validating, and executing tools:

import Ajv from 'ajv'; // JSON Schema validator (the "ajv" npm package)

// Tool definition with JSON Schema for parameters
type Tool = {
  name: string;
  description: string;
  parameters: object; // JSON Schema
  execute: (args: Record<string, unknown>) => Promise<unknown>;
};

interface ToolExecutor {
  listTools(): Tool[];
  executeTool(name: string, args: Record<string, unknown>): Promise<ToolResult>;
}

// Factory function for the tool executor
const createToolExecutor = (tools: Tool[]): ToolExecutor => {
  // Create a map for faster lookup
  const toolMap = new Map(tools.map((tool) => [tool.name, tool]));

  // Instantiate JSON schema validator
  const ajv = new Ajv();

  return {
    listTools: () => [...tools],

    executeTool: async (name, args) => {
      const tool = toolMap.get(name);

      if (!tool) {
        return {
          result: null,
          error: `Tool "${name}" not found`,
        };
      }

      // Validate arguments against schema
      const validate = ajv.compile(tool.parameters);
      if (!validate(args)) {
        return {
          result: null,
          error: `Invalid arguments: ${ajv.errorsText(validate.errors)}`,
        };
      }

      try {
        const result = await tool.execute(args);
        return { result };
      } catch (error) {
        return {
          result: null,
          error: `Execution error: ${error instanceof Error ? error.message : String(error)}`,
        };
      }
    },
  };
};

Notice how we’re using JSON Schema for parameter validation – this catches bad arguments before a tool ever runs and gives both you and the model clear error messages.
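
For example, calling the executor with a misnamed argument fails fast with a readable message before the tool ever runs. Here’s a quick demonstration using a throwaway echo tool (not part of the agent itself), run inside an ES module or any async context:

// A throwaway tool, just to demonstrate schema validation
const echoTool: Tool = {
  name: 'echo',
  description: 'Echoes back the provided text',
  parameters: {
    type: 'object',
    properties: { text: { type: 'string' } },
    required: ['text'],
  },
  execute: async ({ text }) => text,
};

const demoExecutor = createToolExecutor([echoTool]);

// Wrong argument name: rejected by the schema, the tool never runs
const bad = await demoExecutor.executeTool('echo', { message: 'hi' });
console.log(bad.error); // e.g. Invalid arguments: data must have required property 'text'

// Valid arguments: executes normally
const ok = await demoExecutor.executeTool('echo', { text: 'hi' });
console.log(ok.result); // "hi"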

Memory System

Now for a simple memory implementation:

interface MemorySystem {
  getMessages(): Message[];
  addMessage(message: Message): void;
  clear(): void;
}

const createMemorySystem = (): MemorySystem => {
  let messages: Message[] = [];

  return {
    getMessages: () => [...messages], // Return a copy to prevent mutation
    addMessage: (message) => {
      messages = [...messages, message];
    },
    clear: () => {
      messages = [];
    },
  };
};

This is a basic in-memory implementation, but you could easily extend it to persist to a database or vector store. For a production-level system that retains messages across multiple sessions, you’ll need a persistent store.
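
As a sketch of what that extension could look like, here’s a variant backed by a hypothetical asynchronous SessionStore. The store interface is an assumption for illustration, not something defined elsewhere in this post:

// A hypothetical async key-value store for conversation history
interface SessionStore {
  load(sessionId: string): Promise<Message[]>;
  save(sessionId: string, messages: Message[]): Promise<void>;
}

// Persistent variant of the memory system, same MemorySystem interface as above
const createPersistentMemorySystem = async (
  store: SessionStore,
  sessionId: string
): Promise<MemorySystem> => {
  let messages = await store.load(sessionId);

  return {
    getMessages: () => [...messages],
    addMessage: (message) => {
      messages = [...messages, message];
      // Fire-and-forget persistence; a real system would handle write failures
      void store.save(sessionId, messages);
    },
    clear: () => {
      messages = [];
      void store.save(sessionId, messages);
    },
  };
};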

The Agent Orchestrator

Finally, let’s tie everything together with our orchestrator:

type AgentDependencies = {
  llm: LanguageModelInterface;
  toolExecutor: ToolExecutor;
  memory: MemorySystem;
};

type Agent = {
  processQuery: (query: string) => AsyncGenerator<Message>;
};

const createAgent = (deps: AgentDependencies): Agent => {
  const { llm, toolExecutor, memory } = deps;

  // Create a system message describing available tools
  const createSystemPrompt = (): string => {
    const tools = toolExecutor.listTools();
    return `You are a helpful assistant that can use tools to accomplish tasks. Available tools:
${tools.map((tool) => `- ${tool.name}: ${tool.description}`).join('\n')}

When you need to use a tool, respond with a tool call.`;
  };

  const processQuery = async function* (query: string): AsyncGenerator<Message> {
    // Create user message and add to memory
    const userMessage: Message = { role: 'user', content: query };
    memory.addMessage(userMessage);

    // Add system prompt if not already present
    const messages = memory.getMessages();
    if (!messages.some((m) => m.role === 'system')) {
      const systemMessage: Message = {
        role: 'system',
        content: createSystemPrompt(),
      };
      memory.addMessage(systemMessage);
    }

    let assistantMessage: Message = { role: 'assistant', content: '' };
    const pendingToolCalls: ToolCall[] = [];

    // Get response from LLM
    const generator = llm.generateResponse(memory.getMessages());

    try {
      for await (const chunk of generator) {
        if (typeof chunk === 'string') {
          // Text response - append to assistant message
          assistantMessage.content += chunk;
          yield { ...assistantMessage }; // Yield a copy with current content
        } else {
          // Tool call - add to pending list
          pendingToolCalls.push(chunk);

          // Optionally yield a message indicating tool use
          yield {
            role: 'assistant',
            content: `Using tool: ${chunk.name} with arguments: ${JSON.stringify(chunk.arguments)}`,
          };
        }
      }

      // Add complete assistant message to memory
      memory.addMessage(assistantMessage);

      // Process any tool calls
      for (const toolCall of pendingToolCalls) {
        const { id, name, arguments: args } = toolCall;

        try {
          // Execute the tool
          const { result, error } = await toolExecutor.executeTool(name, args);

          // Create tool response message
          const content = error ? `Error: ${error}` : JSON.stringify(result);

          const toolResultMessage: Message = {
            role: 'tool',
            content,
            name,
            toolCallId: id,
          };

          // Add to memory and yield
          memory.addMessage(toolResultMessage);
          yield toolResultMessage;

          // If the tool succeeded, get a follow-up LLM response that incorporates the result
          if (!error) {
            const followUpGenerator = llm.generateResponse(memory.getMessages());

            let followUpMessage: Message = { role: 'assistant', content: '' };

            for await (const chunk of followUpGenerator) {
              if (typeof chunk === 'string') {
                followUpMessage.content += chunk;
                yield { ...followUpMessage };
              } else {
                // Handle nested tool calls if needed
                // This would use recursion or iterate through more tool calls
              }
            }

            // Add follow-up response to memory
            memory.addMessage(followUpMessage);
          }
        } catch (error) {
          console.error(`Tool execution error:`, error);
          yield {
            role: 'tool',
            content: `Failed to execute tool ${name}: ${error instanceof Error ? error.message : String(error)}`,
            name,
            toolCallId: id,
          };
        }
      }
    } catch (error) {
      console.error('Error in agent processing:', error);
      yield {
        role: 'assistant',
        content: `I encountered an error: ${error instanceof Error ? error.message : String(error)}`,
      };
    }
  };

  return { processQuery };
};

There’s a lot going on here, but the core idea is that we process user queries by sending them to the LLM, getting either text or tool calls back, executing any tool calls, feeding results back to the LLM, and streaming the entire process to show progress.

Putting It All Together

Let’s see a complete example using some simple tools:

// Define some example tools
const calculatorTool: Tool = {
  name: 'calculator',
  description: 'Perform arithmetic calculations',
  parameters: {
    type: 'object',
    properties: {
      expression: {
        type: 'string',
        description: 'The math expression to evaluate (e.g., "2 + 2")',
      },
    },
    required: ['expression'],
  },
  execute: async ({ expression }) => {
    // Very simple evaluation - in production use a safer method!
    try {
      return { result: eval(expression as string) };
    } catch (e) {
      throw new Error(`Could not evaluate expression: ${e instanceof Error ? e.message : String(e)}`);
    }
  },
};

const weatherTool: Tool = {
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    type: 'object',
    properties: {
      location: {
        type: 'string',
        description: 'City name, e.g., "New York" or "London, UK"',
      },
      unit: {
        type: 'string',
        enum: ['celsius', 'fahrenheit'],
        default: 'celsius',
      },
    },
    required: ['location'],
  },
  execute: async ({ location, unit }) => {
    // In a real implementation, this would call a weather API
    console.log(`Getting weather for ${location} in ${unit}`);
    return {
      temperature: 22,
      unit: unit || 'celsius',
      condition: 'Sunny',
      location,
    };
  },
};

// Set up the agent with our components
const setupAgent = async () => {
  // Create the LLM interface
  const llm = createLanguageModelInterface({
    provider: 'openai',
    model: 'gpt-4o',
    apiKey: process.env.OPENAI_API_KEY || '',
  });

  // Create the tool executor with our tools
  const toolExecutor = createToolExecutor([calculatorTool, weatherTool]);

  // Create the memory system
  const memory = createMemorySystem();

  // Create the agent
  const agent = createAgent({ llm, toolExecutor, memory });

  return agent;
};

// Example usage in an application
const runAgentExample = async () => {
  const agent = await setupAgent();

  const query = "What's 135 * 28? And after that, what's the weather in Paris?";
  console.log(`User: ${query}`);

  // Process the query and stream responses
  // (assistant messages arrive as cumulative snapshots of the content so far)
  for await (const message of agent.processQuery(query)) {
    if (message.role === 'assistant') {
      process.stdout.write(`Assistant: ${message.content}`);
    } else if (message.role === 'tool') {
      console.log(`\n[Tool ${message.name}]: ${message.content}`);
    }
  }
};

runAgentExample().catch(console.error);
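
A quick aside on the calculator tool: eval is dangerous on untrusted input, even in a demo. If you build something like this for real, a parser library such as mathjs is a safer choice. A minimal sketch, assuming the mathjs package is installed:

import { evaluate } from 'mathjs';

// A safer execute function for the calculator tool than eval
const safeCalculatorExecute = async ({ expression }: Record<string, unknown>) => {
  return { result: evaluate(expression as string) };
};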

Why This Architecture Works Well

This approach has several advantages I’ve discovered through trial and error. Each component has a single responsibility and can be swapped out independently. Pure functions and dependency injection make testing straightforward. It’s easy to switch LLM providers or add new tools. The user sees progress in real-time through streaming. And the functional approach with immutable data and composition over inheritance makes the code more predictable and maintainable.
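
To illustrate the testability point, here’s a minimal sketch that exercises the agent with a stubbed LLM and no tools, so there are no network calls or API keys involved:

// A stub LLM that ignores its input and always yields a fixed string
const stubLLM: LanguageModelInterface = {
  async *generateResponse() {
    yield 'Hello from the stub!';
  },
};

const testAgent = createAgent({
  llm: stubLLM,
  toolExecutor: createToolExecutor([]),
  memory: createMemorySystem(),
});

const runStubTest = async () => {
  let finalContent = '';
  for await (const message of testAgent.processQuery('Hi')) {
    if (message.role === 'assistant') finalContent = message.content;
  }
  console.assert(finalContent === 'Hello from the stub!', 'unexpected agent output');
};

runStubTest().catch(console.error);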

The foundation we’ve built can be extended in various directions as your needs grow. Some ideas:

  • Incorporate permission systems and parameter validation for secure tool execution (an important one! See the sketch just after this list)
  • Add planning capabilities so agents can break complex tasks into steps
  • Enable multi-agent cooperation for specialized subtasks
  • Add MCP support for third-party tooling
  • Implement vector storage for long-term memory
  • Add retry strategies and graceful degradation for error recovery
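
On the permissions point above, one lightweight approach is to wrap the existing ToolExecutor so that every call has to pass an approval check before it runs. The PermissionPolicy type and withPermissions helper here are hypothetical names used only to sketch the idea:

// A policy function that decides whether a given tool call is allowed
type PermissionPolicy = (name: string, args: Record<string, unknown>) => Promise<boolean>;

// Wrap an executor so every call is checked against the policy first
const withPermissions = (
  executor: ToolExecutor,
  isAllowed: PermissionPolicy
): ToolExecutor => ({
  listTools: () => executor.listTools(),
  executeTool: async (name, args) => {
    if (!(await isAllowed(name, args))) {
      return { result: null, error: `Permission denied for tool "${name}"` };
    }
    return executor.executeTool(name, args);
  },
});

You could then pass withPermissions(toolExecutor, policy) into createAgent in place of the raw executor, without touching the orchestrator at all.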

Wrapping Up

Building AI agents is a fascinating challenge that combines LLMs with practical software engineering. By separating concerns into discrete components, we create a flexible system that can evolve with advances in AI.

The functional TypeScript approach shown here emphasizes type safety, composability, and testability. I’ve been using patterns like this in my own projects, and they’ve proven robust as LLMs and their capabilities continue to advance.


© 2025 Andy Peatling