Overview
Modern AI applications need dynamic tool selection: the LLM should understand the context and choose appropriate tools automatically rather than relying on hardcoded tool calls. Alloy MCP Server enables this pattern.
Integration with Vercel AI SDK
Basic Setup
import { openai } from '@ai-sdk/openai';
import { generateText, tool, jsonSchema } from 'ai';
import { z } from 'zod';
import { AlloyMCPClient } from '@alloy/mcp-client';

// Initialize Alloy MCP client
const alloyClient = new AlloyMCPClient({
  serverUrl: 'https://mcp.runalloy.com',
  serverId: 'your-server-id',
  accessToken: 'your-access-token'
});

// Get available tools from Alloy dynamically
const availableTools = await alloyClient.listTools();

// Convert Alloy tools to Vercel AI SDK format. MCP tools describe their
// inputs as JSON Schema, so wrap them with the SDK's jsonSchema() helper
// rather than z.object(), which expects a Zod shape.
const aiTools = availableTools.reduce((acc, alloyTool) => {
  acc[alloyTool.name] = tool({
    description: alloyTool.description,
    parameters: jsonSchema(alloyTool.inputSchema),
    execute: async (params) => {
      return await alloyClient.execute(alloyTool.name, params);
    }
  });
  return acc;
}, {});
Dynamic Tool Discovery
Let the LLM discover and use tools based on user intent:
async function handleUserQuery(userMessage: string) {
  // First, let the LLM understand what tools are available
  const toolDiscovery = await generateText({
    model: openai('gpt-4'),
    messages: [
      {
        role: 'system',
        content: `You have access to various integration tools via Alloy MCP.
First, discover what tools are available for the user's request.`
      },
      {
        role: 'user',
        content: userMessage
      }
    ],
    tools: {
      list_connectors_alloy: tool({
        description: 'List available platform connectors',
        parameters: z.object({ category: z.string().optional() }),
        execute: async (params) => {
          return await alloyClient.execute('list_connectors_alloy', params);
        }
      }),
      get_connector_resources_alloy: tool({
        description: 'Get available actions for a connector',
        parameters: z.object({ connectorId: z.string() }),
        execute: async (params) => {
          return await alloyClient.execute('get_connector_resources_alloy', params);
        }
      })
    }
  });

  // Now execute the actual task with the full tool set, carrying the
  // discovery result forward as conversation context
  const result = await generateText({
    model: openai('gpt-4'),
    messages: [
      {
        role: 'user',
        content: userMessage
      },
      {
        role: 'assistant',
        content: toolDiscovery.text
      },
      {
        role: 'user',
        content: 'Now execute the task'
      }
    ],
    tools: aiTools,
    maxToolRoundtrips: 5
  });

  return result;
}
LangChain Integration
Dynamic Tool Loading
import { ChatOpenAI } from '@langchain/openai';
import { DynamicTool } from '@langchain/core/tools';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { AlloyMCPClient } from '@alloy/mcp-client';

const llm = new ChatOpenAI({
  modelName: "gpt-4",
  temperature: 0
});

const alloyClient = new AlloyMCPClient({
  serverUrl: 'https://mcp.runalloy.com',
  serverId: 'your-server-id',
  accessToken: 'your-access-token'
});

// Dynamically create tools from Alloy MCP
async function createDynamicTools() {
  const tools = await alloyClient.listTools();
  return tools.map(tool =>
    new DynamicTool({
      name: tool.name,
      description: tool.description,
      func: async (input) => {
        const params = JSON.parse(input);
        const result = await alloyClient.execute(tool.name, params);
        return JSON.stringify(result);
      }
    })
  );
}

// Create agent with dynamic tools. createOpenAIFunctionsAgent expects a
// ChatPromptTemplate with an agent_scratchpad placeholder, not a raw string.
const tools = await createDynamicTools();

const prompt = ChatPromptTemplate.fromMessages([
  ['system', `You are an AI assistant with access to various integration tools.
Analyze the user's request and use the appropriate tools to help them.
Always explain what you're doing and why.`],
  ['human', '{input}'],
  new MessagesPlaceholder('agent_scratchpad')
]);

const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt });

const executor = new AgentExecutor({
  agent,
  tools,
  verbose: true
});
Common AI Patterns
1. Context-Aware Tool Selection
Let the LLM understand context and choose tools:
async function contextAwareExecution(context: string, task: string) {
  const response = await generateText({
    model: openai('gpt-4'),
    messages: [
      {
        role: 'system',
        content: `Context: ${context}
You have access to integration tools.
Analyze what's needed and use appropriate tools.`
      },
      {
        role: 'user',
        content: task
      }
    ],
    tools: aiTools,
    toolChoice: 'auto' // Let the model decide which tools to use
  });
  return response;
}

// Example usage
const result = await contextAwareExecution(
  'User is managing an e-commerce store with Shopify and uses Slack for team communication',
  'When a high-value order comes in, notify the team and create a shipping label'
);
2. Multi-Step Reasoning
Enable the LLM to plan and execute complex workflows:
async function multiStepWorkflow(objective: string) {
  const planner = await generateText({
    model: openai('gpt-4'),
    messages: [
      {
        role: 'system',
        content: `You are a workflow planner.
Break down the objective into steps.
For each step, identify which tools to use.`
      },
      {
        role: 'user',
        content: objective
      }
    ]
  });

  // Execute each step with appropriate tools.
  // parsePlan is an application-specific helper that splits the planner's
  // text into step objects; implement it to match your plan format.
  const steps = parsePlan(planner.text);
  const results = [];

  for (const step of steps) {
    const stepResult = await generateText({
      model: openai('gpt-4'),
      messages: [
        {
          role: 'system',
          content: `Execute this step: ${step.description}`
        }
      ],
      tools: aiTools,
      toolChoice: 'required'
    });
    results.push(stepResult);
  }

  return results;
}
3. Adaptive Error Handling
Let the AI handle errors intelligently:
async function executeWithErrorHandling(task: string) {
  const maxRetries = 3;
  let attempt = 0;
  let lastError = null;

  while (attempt < maxRetries) {
    try {
      const result = await generateText({
        model: openai('gpt-4'),
        messages: [
          {
            role: 'system',
            content: lastError
              ? `Execute the task. A previous attempt failed with: ${lastError}`
              : 'Execute the task.'
          },
          {
            role: 'user',
            content: task
          }
        ],
        tools: aiTools
      });
      return result;
    } catch (error) {
      lastError = error.message;
      attempt++;

      // Let the model decide how to handle the error
      const errorHandler = await generateText({
        model: openai('gpt-4'),
        messages: [
          {
            role: 'system',
            content: `An error occurred: ${error.message}
Suggest an alternative approach or fix.`
          }
        ]
      });

      // Retry with the suggested fix or alternative approach
      task = errorHandler.text;
    }
  }
}
Real-World Examples
Customer Support AI Agent
const supportAgent = {
  async handleTicket(ticketContent: string) {
    // AI analyzes the ticket and takes appropriate actions
    const response = await generateText({
      model: openai('gpt-4'),
      messages: [
        {
          role: 'system',
          content: `You are a customer support AI.
Analyze the ticket and take appropriate actions:
- Search for similar issues
- Check customer history
- Create tasks if needed
- Draft responses
- Escalate if necessary`
        },
        {
          role: 'user',
          content: ticketContent
        }
      ],
      tools: {
        ...aiTools,
        searchKnowledgeBase: tool({
          description: 'Search internal knowledge base',
          parameters: z.object({ query: z.string() }),
          execute: async ({ query }) => {
            // Search implementation goes here
          }
        })
      }
    });
    return response;
  }
};
Sales Assistant AI
const salesAssistant = {
  async qualifyLead(leadInfo: any) {
    const response = await generateText({
      model: openai('gpt-4'),
      messages: [
        {
          role: 'system',
          content: `You are a sales AI assistant.
Analyze the lead and:
1. Check if they exist in CRM
2. Enrich data from multiple sources
3. Score the lead
4. Suggest next actions
5. Create follow-up tasks`
        },
        {
          role: 'user',
          content: JSON.stringify(leadInfo)
        }
      ],
      tools: aiTools,
      maxToolRoundtrips: 10 // Allow multiple tool calls
    });
    return response;
  }
};
Data Analyst AI
const dataAnalyst = {
  async analyzeMetrics(request: string) {
    // AI pulls data from multiple sources and analyzes it
    const analysis = await generateText({
      model: openai('gpt-4'),
      messages: [
        {
          role: 'system',
          content: `You are a data analyst AI.
Based on the request:
1. Identify relevant data sources
2. Pull necessary metrics
3. Perform calculations
4. Generate insights
5. Create visualizations if needed`
        },
        {
          role: 'user',
          content: request
        }
      ],
      tools: aiTools
    });
    return analysis;
  }
};
Best Practices
1. Tool Discovery
Always allow the LLM to discover available tools:
const toolDiscoveryPrompt = `
Before executing tasks, discover:
1. What connectors are available
2. What actions each connector supports
3. What parameters are required
This ensures you use the right tool for the job.
`;
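One lightweight way to apply this prompt is to prepend it to the system message of every agent call. The helper below is illustrative, not part of the Alloy or Vercel SDKs:

```typescript
// Illustrative helper (an assumption, not an SDK function): prefix a call's
// system prompt with the tool-discovery instructions before it reaches the model.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function withDiscoveryPrompt(
  discoveryPrompt: string,
  systemPrompt: string,
  userTask: string
): ChatMessage[] {
  return [
    // Discovery instructions come first so they frame every task
    { role: 'system', content: `${discoveryPrompt.trim()}\n\n${systemPrompt}` },
    { role: 'user', content: userTask }
  ];
}
```

The resulting array can be passed directly as the `messages` option of a `generateText` call.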
2. Progressive Disclosure
Don’t overwhelm the LLM with all tools at once:
async function progressiveToolLoading(task: string) {
  // identifyCategory, loadToolsByCategory, and executeWithTools are
  // application-specific helpers; implement them for your tool catalog.
  // First, identify the relevant category
  const category = await identifyCategory(task);

  // Load only the relevant tools
  const relevantTools = await loadToolsByCategory(category);

  // Execute with a focused tool set
  return await executeWithTools(task, relevantTools);
}
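The helpers used above are application-specific. A minimal sketch, assuming a keyword-based category map and tools tagged with a `category` field (both assumptions, not part of the Alloy API):

```typescript
// Hypothetical category map; the categories and keywords are assumptions
// to be adapted to your actual tool catalog.
const CATEGORY_KEYWORDS: Record<string, string[]> = {
  communication: ['slack', 'email', 'notify', 'message'],
  commerce: ['shopify', 'order', 'invoice', 'shipping'],
  crm: ['lead', 'contact', 'pipeline', 'deal']
};

// Pick the first category whose keywords appear in the task text
function identifyCategory(task: string): string | undefined {
  const lower = task.toLowerCase();
  for (const [category, keywords] of Object.entries(CATEGORY_KEYWORDS)) {
    if (keywords.some(k => lower.includes(k))) return category;
  }
  return undefined; // no match: fall back to loading all tools
}

interface ToolMeta {
  name: string;
  category: string;
}

// Keep only tools in the chosen category (or all tools when none matched)
function filterToolsByCategory(tools: ToolMeta[], category?: string): ToolMeta[] {
  return category ? tools.filter(t => t.category === category) : tools;
}
```

In production you would likely let a cheap classifier model pick the category instead of keyword matching, but the shape of the flow stays the same.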
3. Context Preservation
Maintain context across tool calls:
const contextualAgent = {
  history: [],

  async execute(task: string) {
    const response = await generateText({
      model: openai('gpt-4'),
      messages: [
        ...this.history,
        {
          role: 'user',
          content: task
        }
      ],
      tools: aiTools
    });

    // Preserve context for subsequent calls
    this.history.push({ role: 'user', content: task });
    this.history.push({ role: 'assistant', content: response.text });

    return response;
  }
};
Performance Optimization
Caching Tool Definitions
class ToolCache {
  private cache = new Map();
  private ttl = 5 * 60 * 1000; // 5 minutes

  async getTools(category?: string) {
    const key = category || 'all';
    const cached = this.cache.get(key);
    if (cached && Date.now() - cached.timestamp < this.ttl) {
      return cached.tools;
    }
    const tools = await alloyClient.listTools({ category });
    this.cache.set(key, {
      tools,
      timestamp: Date.now()
    });
    return tools;
  }
}
Parallel Tool Execution
async function parallelExecution(tasks: string[]) {
  const executions = tasks.map(task =>
    generateText({
      model: openai('gpt-4'),
      messages: [{ role: 'user', content: task }],
      tools: aiTools
    })
  );
  return await Promise.all(executions);
}
Monitoring & Observability
Track AI tool usage for optimization:
class ToolUsageMonitor {
  async trackExecution(toolName: string, params: any, result: any) {
    const metrics = {
      tool: toolName,
      timestamp: new Date(),
      parameters: params,
      success: !!result,
      executionTime: result?.executionTime,
      tokensUsed: result?.tokensUsed
    };
    // Send to analytics (sendMetrics is your own reporting hook)
    await this.sendMetrics(metrics);
  }

  async getUsageReport() {
    // Analyze which tools are used most and identify
    // patterns and optimization opportunities
    return await this.analyzeUsage();
  }
}
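For a concrete starting point, here is a minimal in-memory version of the monitor's aggregation side. The report shape (call counts, average latency, success rate) is an assumption; a production system would ship metrics to an analytics backend instead of keeping them in memory:

```typescript
// Minimal in-memory metrics store; illustrative only, the report shape
// is an assumption rather than anything the Alloy SDK prescribes.
interface ToolMetric {
  tool: string;
  success: boolean;
  executionTime: number;
}

class InMemoryUsageStore {
  private metrics: ToolMetric[] = [];

  record(metric: ToolMetric): void {
    this.metrics.push(metric);
  }

  // Aggregate per-tool call counts, average latency, and success rate
  report(): Record<string, { calls: number; avgTime: number; successRate: number }> {
    const out: Record<string, { calls: number; avgTime: number; successRate: number }> = {};
    for (const m of this.metrics) {
      const e = out[m.tool] ?? { calls: 0, avgTime: 0, successRate: 0 };
      // Running averages, updated one observation at a time
      e.avgTime = (e.avgTime * e.calls + m.executionTime) / (e.calls + 1);
      e.successRate = (e.successRate * e.calls + (m.success ? 1 : 0)) / (e.calls + 1);
      e.calls += 1;
      out[m.tool] = e;
    }
    return out;
  }
}
```

A report like this is usually enough to spot tools that dominate latency or fail often, which are the first candidates for caching or prompt changes.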
Getting Started
1. Install Dependencies
npm install ai @ai-sdk/openai zod
npm install @alloy/mcp-client
2. Set Up MCP Server
const server = await createMCPServer({
  name: "AI Assistant Tools",
  restrictions: {
    // Configure based on your security needs
  }
});
3. Initialize AI with Dynamic Tools
const ai = await initializeAI({
  model: 'gpt-4',
  temperature: 0,
  tools: await loadAlloyTools()
});
4. Start Building
Let your AI dynamically discover and use tools based on context, making your application truly intelligent and adaptive.