# Client API

Complete API reference for the KoAI client.
The KoAI client is the main interface for interacting with AI models. This reference covers all available methods and configurations.
## KoAI Class

### Constructor

```typescript
new KoAI(config: KoAIConfig)
```

#### Parameters

| Parameter | Type | Description |
|---|---|---|
| `config` | `KoAIConfig` | Configuration object |
#### KoAIConfig

```typescript
interface KoAIConfig {
  // Required
  apiKey: string;

  // Optional
  provider?: 'openai' | 'anthropic' | 'google' | 'custom';
  model?: string;
  temperature?: number;
  maxTokens?: number;
  timeout?: number;

  // Advanced
  systemPrompt?: string;
  tools?: Tool[];
  memory?: MemoryConfig;
  cache?: CacheConfig;
  retry?: RetryConfig;

  // Debugging (see the Debugging section below)
  debug?: boolean;
  logLevel?: string;
}
```
#### Example

```typescript
import { KoAI } from '@KoAI/core';

const ai = new KoAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4-turbo',
  temperature: 0.7,
  maxTokens: 2000,
  systemPrompt: 'You are a helpful assistant.'
});
```
## Core Methods

### chat()

Generate a single response to a message.

```typescript
async chat(message: string, options?: ChatOptions): Promise<ChatResponse>
```

#### Parameters

| Parameter | Type | Description |
|---|---|---|
| `message` | `string` | The input message |
| `options` | `ChatOptions` | Optional configuration |
#### ChatOptions

```typescript
interface ChatOptions {
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
  tools?: Tool[];
  stream?: boolean;
}
```
#### ChatResponse

```typescript
interface ChatResponse {
  content: string;
  model: string;
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  finishReason: 'stop' | 'length' | 'tool_calls';
  toolCalls?: ToolCall[];
  metadata: {
    id: string;
    created: number;
    duration: number;
  };
}
```

#### Example

```typescript
const response = await ai.chat('Explain TypeScript interfaces');

console.log(response.content);
console.log(`Tokens used: ${response.usage.totalTokens}`);
```
### streamChat()

Stream responses in real time.

```typescript
async streamChat(message: string, options?: ChatOptions): Promise<AsyncIterable<ChatChunk>>
```

#### ChatChunk

```typescript
interface ChatChunk {
  content: string;
  delta: string;
  isComplete: boolean;
  metadata?: {
    id: string;
    model: string;
  };
}
```

#### Example

```typescript
const stream = await ai.streamChat('Write a story about AI');

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
  if (chunk.isComplete) {
    console.log('\n--- Stream complete ---');
  }
}
```
### batchChat()

Process multiple messages in parallel.

```typescript
async batchChat(messages: string[], options?: BatchChatOptions): Promise<ChatResponse[]>
```

#### BatchChatOptions

```typescript
interface BatchChatOptions extends ChatOptions {
  maxConcurrency?: number;
  preserveOrder?: boolean;
}
```

#### Example

```typescript
const questions = [
  'What is React?',
  'Explain TypeScript',
  'How does async/await work?'
];

const responses = await ai.batchChat(questions, {
  maxConcurrency: 3,
  preserveOrder: true
});

responses.forEach((response, index) => {
  console.log(`Q: ${questions[index]}`);
  console.log(`A: ${response.content}\n`);
});
```
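Under the hood, `maxConcurrency` is a standard concurrency-limit pattern: a fixed pool of workers pulls messages from a shared queue. A minimal, generic sketch of that pattern (the helper name `mapWithConcurrency` is ours for illustration, not part of the KoAI API):

```typescript
// Sketch of a concurrency-limited map, the pattern behind batchChat's
// maxConcurrency option. Not KoAI's actual implementation.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unprocessed index.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
  return results; // order matches the input, as with preserveOrder: true
}
```

Writing each result to its original index is what makes order preservation free, even though completions arrive out of order.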
## Conversation Management

### createConversation()

Create a conversation with persistent context.

```typescript
createConversation(options?: ConversationOptions): Conversation
```

#### ConversationOptions

```typescript
interface ConversationOptions {
  id?: string;
  systemPrompt?: string;
  memory?: MemoryConfig;
  maxContextLength?: number;
}
```

#### Conversation Methods

```typescript
interface Conversation {
  chat(message: string): Promise<ChatResponse>;
  streamChat(message: string): Promise<AsyncIterable<ChatChunk>>;
  getHistory(): ConversationMessage[];
  clear(): void;
  export(): ConversationExport;
  import(data: ConversationExport): void;
}
```

#### Example

```typescript
const conversation = ai.createConversation({
  id: 'user-123-session',
  systemPrompt: 'You are a coding tutor.'
});

await conversation.chat('I want to learn TypeScript');
await conversation.chat('What are interfaces?');
await conversation.chat('Can you give me an example?');

// Get conversation history
const history = conversation.getHistory();
console.log(`Messages: ${history.length}`);
```
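`export()` and `import()` make conversations portable between processes. Assuming `ConversationExport` is plain JSON-serializable data (this reference does not specify its shape), persisting a session to disk can be sketched as:

```typescript
import { writeFileSync, readFileSync } from 'node:fs';

// Sketch: persist conversation state between sessions. Assumes export()
// returns plain, JSON-serializable data; the file path is arbitrary.
function saveConversation(path: string, data: unknown): void {
  writeFileSync(path, JSON.stringify(data), 'utf8');
}

function loadConversation<T>(path: string): T {
  return JSON.parse(readFileSync(path, 'utf8')) as T;
}

// Hypothetical flow with the conversation above:
// saveConversation('session.json', conversation.export());
// const restored = ai.createConversation({ id: 'user-123-session' });
// restored.import(loadConversation('session.json'));
```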
## Tool Integration

### addTool()

Add a tool to the client.

```typescript
addTool(tool: Tool): void
```

### removeTool()

Remove a tool by name.

```typescript
removeTool(toolName: string): void
```

### getTools()

Get all registered tools.

```typescript
getTools(): Tool[]
```

#### Example

```typescript
const weatherTool: Tool = {
  name: 'get_weather',
  description: 'Get weather information',
  parameters: {
    type: 'object',
    properties: {
      city: { type: 'string' }
    },
    required: ['city']
  },
  handler: async ({ city }) => {
    // Implementation
    return `Weather in ${city}: Sunny, 22°C`;
  }
};

ai.addTool(weatherTool);

const response = await ai.chat('What\'s the weather in Paris?');
// AI will automatically use the weather tool
```
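Conceptually, `addTool()`, `removeTool()`, and `getTools()` behave like a name-keyed registry. A simplified sketch of that data structure (types trimmed for brevity; this is not KoAI's actual implementation):

```typescript
// Simplified Tool shape for this sketch only.
interface SketchTool {
  name: string;
  description: string;
  handler: (args: Record<string, unknown>) => Promise<string>;
}

// A Map keyed by tool name gives O(1) add/remove/lookup and makes
// re-registering a name replace the previous tool.
class ToolRegistry {
  private tools = new Map<string, SketchTool>();

  addTool(tool: SketchTool): void {
    this.tools.set(tool.name, tool);
  }

  removeTool(toolName: string): void {
    this.tools.delete(toolName);
  }

  getTools(): SketchTool[] {
    return [...this.tools.values()];
  }
}
```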
## Memory Management

### getMemory()

Access the memory system.

```typescript
getMemory(): Memory | null
```

### setMemory()

Configure the memory system.

```typescript
setMemory(memory: MemoryConfig): void
```

#### Example

```typescript
import { ConversationMemory } from '@KoAI/core';

const memory = new ConversationMemory({
  maxTokens: 4000,
  summaryInterval: 10
});

ai.setMemory(memory);

// Memory will automatically maintain context
await ai.chat('My name is Alice');
await ai.chat('What did I tell you about my name?');
```
## Configuration Methods

### updateConfig()

Update the client configuration.

```typescript
updateConfig(config: Partial<KoAIConfig>): void
```

### getConfig()

Get the current configuration.

```typescript
getConfig(): KoAIConfig
```

#### Example

```typescript
// Update temperature for more creative responses
ai.updateConfig({ temperature: 1.2 });

// Get current model
const config = ai.getConfig();
console.log(`Current model: ${config.model}`);
```
## Utility Methods

### tokenCount()

Count the tokens in a piece of text.

```typescript
async tokenCount(text: string): Promise<number>
```

### validateMessage()

Validate message format and length.

```typescript
validateMessage(message: string): ValidationResult
```

#### ValidationResult

```typescript
interface ValidationResult {
  valid: boolean;
  errors: string[];
  tokenCount: number;
  estimatedCost: number;
}
```

#### Example

```typescript
const tokenCount = await ai.tokenCount('Hello, world!');
console.log(`Token count: ${tokenCount}`);

const validation = ai.validateMessage('Very long message...');
if (!validation.valid) {
  console.log('Validation errors:', validation.errors);
}
```
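`tokenCount()` is async because it defers to the provider's tokenizer. When only a quick, offline ballpark is needed, the common "roughly four characters per token" rule of thumb for English text can stand in (an approximation only; this helper is not part of the KoAI API):

```typescript
// Rough offline token estimate using the ~4 chars/token heuristic for
// English text. An approximation only; call tokenCount() whenever an
// exact, model-specific count matters.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```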
## Event Handling

### on()

Listen for events.

```typescript
on(event: string, handler: Function): void
```

### off()

Remove an event listener.

```typescript
off(event: string, handler: Function): void
```

### Events

| Event | Description | Handler Signature |
|---|---|---|
| `response` | New response received | `(response: ChatResponse) => void` |
| `error` | Error occurred | `(error: Error) => void` |
| `tokenUsage` | Token usage update | `(usage: TokenUsage) => void` |
| `toolCall` | Tool was called | `(toolCall: ToolCall) => void` |

#### Example

```typescript
ai.on('response', (response) => {
  console.log(`Received response: ${response.content.length} chars`);
});

ai.on('error', (error) => {
  console.error('AI Error:', error.message);
});

ai.on('tokenUsage', (usage) => {
  console.log(`Tokens used: ${usage.totalTokens}`);
});
```
## Error Handling

### KoAIError

Base error class for all KoAI errors.

```typescript
class KoAIError extends Error {
  code: string;
  statusCode?: number;
  retryable: boolean;
  metadata?: Record<string, any>;
}
```

### Error Codes

| Code | Description | Retryable |
|---|---|---|
| `INVALID_API_KEY` | API key is invalid | No |
| `RATE_LIMIT_EXCEEDED` | Rate limit exceeded | Yes |
| `CONTEXT_LENGTH_EXCEEDED` | Message too long | No |
| `MODEL_NOT_FOUND` | Model not available | No |
| `NETWORK_ERROR` | Network connectivity issue | Yes |
| `TIMEOUT` | Request timed out | Yes |
#### Example

```typescript
try {
  const response = await ai.chat('Hello');
} catch (error) {
  if (error instanceof KoAIError) {
    console.log(`Error code: ${error.code}`);
    console.log(`Retryable: ${error.retryable}`);
    if (error.retryable) {
      // Implement retry logic
      await new Promise(resolve => setTimeout(resolve, 1000));
      // Retry the request
    }
  }
}
```
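The `retryable` flag pairs naturally with exponential backoff. A small generic helper, sketched here (not part of the KoAI API), shows what such retry logic can look like:

```typescript
// Sketch of exponential-backoff retry around any async call. For
// simplicity this retries on every error; in real use, check
// `error instanceof KoAIError && error.retryable` before retrying.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 1x, 2x, 4x... the base delay between attempts.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage (hypothetical): const response = await withRetry(() => ai.chat('Hello'));
```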
## Advanced Features

### Custom Providers

Register custom AI providers.

```typescript
interface CustomProvider {
  name: string;
  endpoint: string;
  headers?: Record<string, string>;
  transform?: {
    request: (data: any) => any;
    response: (data: any) => ChatResponse;
  };
}

ai.registerProvider(customProvider);
```
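The `transform` hooks adapt KoAI's request/response shapes to the provider's wire format. A sketch of a filled-in provider follows; everything provider-specific here, the endpoint URL, header, and field names (`prompt`, `max_len`, `text`, `in_tokens`, `out_tokens`), is hypothetical and must match your provider's real API:

```typescript
// Hypothetical provider: endpoint and wire-format field names are
// invented for illustration only.
const customProvider = {
  name: 'my-llm',
  endpoint: 'https://example.com/v1/generate',
  headers: { Authorization: `Bearer ${process.env.MY_LLM_KEY}` },
  transform: {
    // Map KoAI's request into the provider's expected payload.
    request: (data: any) => ({ prompt: data.message, max_len: data.maxTokens }),
    // Map the provider's raw reply back into a ChatResponse-shaped object.
    response: (data: any) => ({
      content: data.text,
      model: 'my-llm',
      usage: {
        promptTokens: data.in_tokens,
        completionTokens: data.out_tokens,
        totalTokens: data.in_tokens + data.out_tokens
      },
      finishReason: 'stop' as const,
      metadata: { id: data.id, created: Date.now(), duration: 0 }
    })
  }
};

// ai.registerProvider(customProvider);
```

Keeping both transforms as pure functions makes them easy to unit-test without any network calls.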
### Middleware

Add middleware for request/response processing.

```typescript
ai.use(async (context, next) => {
  console.log('Before request:', context.message);
  const response = await next();
  console.log('After response:', response.content);
  return response;
});
```
### Debugging

Enable debug mode for detailed logging.

```typescript
const ai = new KoAI({
  apiKey: process.env.OPENAI_API_KEY,
  debug: true,
  logLevel: 'verbose'
});
```
## Type Definitions

### Complete TypeScript Definitions

```typescript
// Export all types for TypeScript users
export {
  KoAI,
  KoAIConfig,
  ChatOptions,
  ChatResponse,
  ChatChunk,
  Tool,
  ToolCall,
  Memory,
  MemoryConfig,
  Conversation,
  ConversationOptions,
  KoAIError
} from '@KoAI/core';
```
## Next Steps
- Models API - Model-specific configurations
- Streaming API - Real-time streaming
- Embeddings API - Vector embeddings
- Examples - Practical examples