API Integration Guide

TeachWise.ai

Quick Start & Code Examples


Quick Start with Anthropic Claude API

Why Anthropic Claude?

Claude excels at educational content generation, understands complex curriculum requirements, and provides consistent, high-quality lesson plans aligned with African curricula.

1. Installation

# Install the Anthropic SDK
npm install @anthropic-ai/sdk

# Or with yarn
yarn add @anthropic-ai/sdk

# For Python (ideally inside a virtual environment)
pip install anthropic

2. Environment Setup

# .env.local
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
NEXT_PUBLIC_APP_URL=http://localhost:3000
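Since every call below depends on `ANTHROPIC_API_KEY`, it can help to fail fast at startup rather than getting opaque 401s later. A minimal sketch (the `requireEnv` helper and its module path are illustrative, not part of the SDK):

```typescript
// lib/env.ts - fail fast when required configuration is missing
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage:
// const apiKey = requireEnv('ANTHROPIC_API_KEY');
```

Calling this once where the client is constructed turns a silent misconfiguration into an immediate, descriptive error.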

3. Basic Usage (TypeScript/Node.js)

import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

async function generateLessonPlan(subject: string, grade: number, topic: string) {
  const message = await anthropic.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 4096,
    messages: [{
      role: 'user',
      content: `Generate a detailed lesson plan for Grade ${grade} ${subject} on the topic: "${topic}". 
      This should be aligned with the Kenyan CBC curriculum. Include:
      - Learning objectives
      - Materials needed
      - Step-by-step activities (30-40 minutes)
      - Assessment methods
      - Differentiation strategies`
    }]
  });
  
  const block = message.content[0];
  return block.type === 'text' ? block.text : ''; // content blocks are a union type; narrow before reading .text
}

// Usage
const lessonPlan = await generateLessonPlan('Mathematics', 5, 'Introduction to Fractions');
console.log(lessonPlan);

Curriculum-Specific Prompt Templates

πŸ‡°πŸ‡ͺ Kenya CBC Lesson Plan

const KENYA_CBC_PROMPT = `You are an expert teacher familiar with Kenya's Competency-Based Curriculum (CBC).
Generate a comprehensive lesson plan for:
- Grade: {grade}
- Subject: {subject}
- Strand: {strand}
- Sub-strand: {substrand}
- Specific Learning Outcome: {learningOutcome}
- Duration: {duration} minutes

Format the lesson plan with:
1. Learning Outcomes (Knowledge, Skills, Attitudes)
2. Key Inquiry Questions
3. Core Competencies to develop
4. Values to instill
5. Learning Resources and Materials
6. Learning Activities (Introduction, Main Activity, Conclusion)
7. Assessment Rubric (Exceeds, Meets, Approaches, Below Expectations)
8. Reflection questions for learners

Use CBC terminology and align with Grade {grade} expectations.`;

async function generateKenyaCBCLesson(params: {
  grade: number;
  subject: string;
  strand: string;
  substrand: string;
  learningOutcome: string;
  duration: number;
}) {
  // Use replaceAll: {grade} appears more than once in the template,
  // and String.replace with a string argument only substitutes the first match
  const prompt = KENYA_CBC_PROMPT
    .replaceAll('{grade}', params.grade.toString())
    .replaceAll('{subject}', params.subject)
    .replaceAll('{strand}', params.strand)
    .replaceAll('{substrand}', params.substrand)
    .replaceAll('{learningOutcome}', params.learningOutcome)
    .replaceAll('{duration}', params.duration.toString());
    
  const message = await anthropic.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 4096,
    messages: [{ role: 'user', content: prompt }]
  });
  
  const block = message.content[0];
  return block.type === 'text' ? block.text : '';
}

πŸ‡ΊπŸ‡¬ Uganda Curriculum Lesson Plan

const UGANDA_CURRICULUM_PROMPT = `You are an expert teacher familiar with Uganda's National Curriculum.
Generate a detailed lesson plan for:
- Class: Primary {grade}
- Subject: {subject}
- Topic: {topic}
- Duration: {duration} minutes

Structure the lesson plan with:
1. Learning Objectives (By the end of the lesson, learners should be able to...)
2. Teaching and Learning Materials
3. Introduction (5-10 minutes)
4. Development/Presentation (20-25 minutes)
5. Conclusion (5 minutes)
6. Assessment and Evaluation
7. Teacher's Reflection

Align with Uganda's primary education standards and make it practical for Ugandan classrooms.`;

async function generateUgandaLesson(params: {
  grade: number;
  subject: string;
  topic: string;
  duration: number;
}) {
  const prompt = UGANDA_CURRICULUM_PROMPT
    .replace('{grade}', params.grade.toString())
    .replace('{subject}', params.subject)
    .replace('{topic}', params.topic)
    .replace('{duration}', params.duration.toString());
    
  const message = await anthropic.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 4096,
    messages: [{ role: 'user', content: prompt }]
  });
  
  const block = message.content[0];
  return block.type === 'text' ? block.text : '';
}
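The two generators above build their prompts with near-identical chains of string replacements. As a sketch, a small helper that fills every `{placeholder}` occurrence in one pass avoids the repetition and scales to new templates (the `fillTemplate` name and module path are illustrative):

```typescript
// lib/template.ts - fill every {placeholder} occurrence in a prompt template
export function fillTemplate(
  template: string,
  params: Record<string, string | number>
): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    // Substitute known keys; leave unknown placeholders untouched so gaps are visible
    key in params ? String(params[key]) : match
  );
}

// Usage:
// const prompt = fillTemplate(KENYA_CBC_PROMPT, {
//   grade: 5, subject: 'Mathematics', strand: 'Numbers',
//   substrand: 'Fractions', learningOutcome: '...', duration: 40
// });
```

Because the regex is global, placeholders that appear more than once in a template (such as `{grade}` in the CBC prompt) are all filled.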

AI Chat Assistant Implementation

Streaming Chat with Context

// /api/chat/route.ts - Next.js API Route
import Anthropic from '@anthropic-ai/sdk';
import { NextRequest, NextResponse } from 'next/server';

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY!,
});

// System prompt with curriculum context
const SYSTEM_PROMPT = `You are an AI teaching assistant for TeachWise.ai, specialized in helping teachers with:
- Creating lesson plans aligned with Kenyan CBC and Ugandan curricula
- Answering curriculum-specific questions
- Providing teaching strategies and classroom management advice
- Generating assessment materials
- Offering differentiation strategies for diverse learners

You have deep knowledge of:
- Kenya's Competency-Based Curriculum (CBC) for all grades
- Uganda's National Curriculum
- East African educational contexts and challenges
- Best practices in pedagogy

Always be practical, supportive, and provide actionable advice for teachers.`;

export async function POST(req: NextRequest) {
  try {
    const { messages, userContext } = await req.json();
    
    // Add user context (grade, subject, country) to the conversation
    const contextMessage = userContext 
      ? `[Context: Teacher of Grade ${userContext.grade} ${userContext.subject} in ${userContext.country}]`
      : '';
    
    const stream = await anthropic.messages.stream({
      model: 'claude-haiku-4-5', // Use Haiku for chat (faster & cheaper)
      max_tokens: 2048,
      // Fold the teacher's context into the system prompt so it frames every turn,
      // rather than appending it after the user's latest question
      system: contextMessage ? `${SYSTEM_PROMPT}\n\n${contextMessage}` : SYSTEM_PROMPT,
      messages
    });

    // Stream the response
    const encoder = new TextEncoder();
    const readableStream = new ReadableStream({
      async start(controller) {
        for await (const chunk of stream) {
          if (chunk.type === 'content_block_delta' && 
              chunk.delta.type === 'text_delta') {
            controller.enqueue(
              encoder.encode(`data: ${JSON.stringify({ text: chunk.delta.text })}\n\n`)
            );
          }
        }
        controller.close();
      },
    });

    return new Response(readableStream, {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
      },
    });
  } catch (error) {
    console.error('Chat error:', error);
    return NextResponse.json({ error: 'Failed to generate response' }, { status: 500 });
  }
}

Client-Side Chat Component

// components/ChatAssistant.tsx
'use client';

import { useState } from 'react';

export function ChatAssistant() {
  const [messages, setMessages] = useState<Array<{ role: 'user' | 'assistant'; content: string }>>([]);
  const [input, setInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  async function sendMessage() {
    if (!input.trim()) return;

    const userMessage = { role: 'user' as const, content: input };
    setMessages(prev => [...prev, userMessage]);
    setInput('');
    setIsLoading(true);

    try {
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          messages: [...messages, userMessage],
          userContext: {
            grade: 5,
            subject: 'Mathematics',
            country: 'Kenya'
          }
        }),
      });

      const reader = response.body?.getReader();
      const decoder = new TextDecoder();
      let assistantMessage = '';
      let buffer = '';

      while (true) {
        const { done, value } = await reader!.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const events = buffer.split('\n\n');
        buffer = events.pop() ?? ''; // an SSE event may be split across chunks; keep the tail for the next read

        for (const line of events) {
          if (line.startsWith('data: ')) {
            const data = JSON.parse(line.slice(6));
            assistantMessage += data.text;
            setMessages(prev => {
              const newMessages = [...prev];
              const lastMessage = newMessages[newMessages.length - 1];
              if (lastMessage?.role === 'assistant') {
                newMessages[newMessages.length - 1] = {
                  role: 'assistant',
                  content: assistantMessage
                };
              } else {
                newMessages.push({ role: 'assistant', content: assistantMessage });
              }
              return newMessages;
            });
          }
        }
      }
    } catch (error) {
      console.error('Error:', error);
    } finally {
      setIsLoading(false);
    }
  }

  return (
    <div className="flex flex-col h-full">
      <div className="flex-1 overflow-y-auto p-4 space-y-2">
        {messages.map((msg, idx) => (
          <div key={idx} className={msg.role === 'user' ? 'text-right' : 'text-left'}>
            {msg.content}
          </div>
        ))}
      </div>
      <div className="flex gap-2 p-4">
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyPress={(e) => e.key === 'Enter' && sendMessage()}
          placeholder="Ask me anything about teaching..."
          className="flex-1 border rounded-lg px-4 py-2 focus:outline-none focus:border-indigo-500"
          disabled={isLoading}
        />
      </div>
    </div>
  );
}

Response Caching for Cost Optimization

πŸ’‘ Critical for Cost Control

Caching common curriculum queries can reduce API costs by 60-80%. A teacher asking "Generate Grade 5 Math fractions lesson" should receive the cached result when the same request has already been made.

// lib/ai-cache.ts
import Redis from 'ioredis';
import crypto from 'crypto';

const redis = new Redis(process.env.REDIS_URL!);

export async function getCachedOrGenerate(
  prompt: string,
  generateFn: () => Promise<string>,
  ttl: number = 7 * 24 * 60 * 60 // 7 days
): Promise<string> {
  // Create cache key from prompt hash
  const cacheKey = `ai:${crypto
    .createHash('md5')
    .update(prompt.toLowerCase().trim())
    .digest('hex')}`;

  // Check cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    console.log('βœ“ Cache hit:', cacheKey);
    return cached;
  }

  // Generate new response
  console.log('βœ— Cache miss, generating:', cacheKey);
  const response = await generateFn();

  // Store in cache
  await redis.setex(cacheKey, ttl, response);
  
  return response;
}

// Usage in API route
export async function POST(req: NextRequest) {
  const { subject, grade, topic } = await req.json();
  
  const prompt = `Generate a Grade ${grade} ${subject} lesson plan on ${topic}`;
  
  const lessonPlan = await getCachedOrGenerate(
    prompt,
    async () => {
      const message = await anthropic.messages.create({
        model: 'claude-sonnet-4-5-20250929',
        max_tokens: 4096,
        messages: [{ role: 'user', content: prompt }]
      });
      const block = message.content[0];
      return block.type === 'text' ? block.text : '';
    }
  );
  
  return NextResponse.json({ lessonPlan });
}

Rate Limiting for Free Tier

// middleware/rate-limit.ts
import { NextRequest, NextResponse } from 'next/server';
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);

interface RateLimitConfig {
  free: { requests: number; window: number };
  pro: { requests: number; window: number };
  school: { requests: number; window: number };
}

const RATE_LIMITS: RateLimitConfig = {
  free: { requests: 5, window: 24 * 60 * 60 },      // 5 per day
  pro: { requests: 1000, window: 24 * 60 * 60 },    // 1000 per day
  school: { requests: 10000, window: 24 * 60 * 60 } // 10000 per day
};

export async function checkRateLimit(
  userId: string,
  tier: 'free' | 'pro' | 'school'
): Promise<{ allowed: boolean; remaining: number }> {
  const limit = RATE_LIMITS[tier];
  const key = `ratelimit:${tier}:${userId}`;
  
  const current = await redis.incr(key);
  
  if (current === 1) {
    await redis.expire(key, limit.window);
  }
  
  const allowed = current <= limit.requests;
  const remaining = Math.max(0, limit.requests - current);
  
  return { allowed, remaining };
}

// Use in API route
export async function POST(req: NextRequest) {
  const userId = req.headers.get('x-user-id');
  const userTier = req.headers.get('x-user-tier') as 'free' | 'pro' | 'school' | null;
  
  if (!userId || !userTier) {
    return NextResponse.json({ error: 'Missing user identification headers' }, { status: 401 });
  }
  
  const { allowed, remaining } = await checkRateLimit(userId, userTier);
  
  if (!allowed) {
    return NextResponse.json(
      { error: 'Rate limit exceeded. Upgrade to Pro for unlimited access.' },
      { 
        status: 429,
        headers: { 'X-RateLimit-Remaining': '0' }
      }
    );
  }
  
  // Process request...
  return NextResponse.json(
    { data: '...' },
    { headers: { 'X-RateLimit-Remaining': remaining.toString() } }
  );
}

Token Usage & Cost Tracking

// lib/cost-tracker.ts
import { db } from './db'; // your database client (e.g. Prisma) - adjust the path to your setup
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
  model: string;
  userId: string;
  timestamp: Date;
}

// Anthropic pricing (USD per 1M tokens)
const PRICING = {
  'claude-sonnet-4-5-20250929': { input: 3, output: 15 },
  'claude-haiku-4-5': { input: 1, output: 5 }
};

export async function trackUsage(usage: TokenUsage) {
  const model = usage.model as keyof typeof PRICING;
  const pricing = PRICING[model];
  
  const cost = (
    (usage.inputTokens / 1_000_000) * pricing.input +
    (usage.outputTokens / 1_000_000) * pricing.output
  );
  
  // Store in database for analytics
  await db.usage.create({
    data: {
      userId: usage.userId,
      model: usage.model,
      inputTokens: usage.inputTokens,
      outputTokens: usage.outputTokens,
      cost: cost,
      timestamp: usage.timestamp
    }
  });
  
  console.log(`πŸ’° API cost: $${cost.toFixed(4)} for user ${usage.userId}`);
  
  return cost;
}

// Use after API call
const message = await anthropic.messages.create({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 4096,
  messages: [{ role: 'user', content: prompt }]
});

await trackUsage({
  inputTokens: message.usage.input_tokens,
  outputTokens: message.usage.output_tokens,
  model: 'claude-sonnet-4-5-20250929',
  userId: userId,
  timestamp: new Date()
});

Error Handling & Retry Logic

// lib/api-client.ts
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  maxRetries: 0, // disable SDK-level retries; we implement our own backoff below
});

export async function generateWithRetry(
  prompt: string,
  options: {
    model?: string;
    maxTokens?: number;
    retries?: number;
  } = {}
): Promise<string> {
  const {
    model = 'claude-sonnet-4-5-20250929',
    maxTokens = 4096,
    retries = 3
  } = options;

  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const message = await anthropic.messages.create({
        model,
        max_tokens: maxTokens,
        messages: [{ role: 'user', content: prompt }]
      });
      
      const block = message.content[0];
      return block.type === 'text' ? block.text : '';
      
    } catch (error: any) {
      console.error(`Attempt ${attempt} failed:`, error.message);
      
      // Don't retry on certain errors
      if (error.status === 401 || error.status === 403) {
        throw new Error('API authentication failed');
      }
      
      if (error.status === 400) {
        throw new Error('Invalid request: ' + error.message);
      }
      
      // Retry on rate limits and server errors
      if (attempt < retries) {
        const delay = Math.min(1000 * Math.pow(2, attempt), 10000); // Exponential backoff
        console.log(`Retrying in ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      
      throw error;
    }
  }
  
  throw new Error('Max retries exceeded');
}

// Usage
try {
  const lessonPlan = await generateWithRetry(
    'Generate a Grade 5 Math lesson on fractions',
    { model: 'claude-sonnet-4-5-20250929' }
  );
  console.log(lessonPlan);
} catch (error) {
  console.error('Failed to generate lesson plan:', error);
  // Show user-friendly error message
}

Β© 2025 TeachWise.ai API Documentation

For more details, visit: Anthropic Documentation